The Facebook Dilemma | Interview Of Tim Sparapani: Former Facebook Director of Policy

Tim Sparapani was the director of global public policy at Facebook from 2009-2011. He is also the founder of SPQR Strategies. This is the transcript of an interview with Frontline’s James Jacoby conducted on March 15, 2018. It has been edited in parts for clarity and length.

What were you doing before you went over to Facebook? You were at the ACLU, right?

I was.

Yes. So what was your job? What were you working on?

I was leading privacy in the post-9/11 period for the ACLU nationwide, thinking about how to protect people in the midst of a whole series of extraordinary changes to how the government was treating its citizens, how it was vetting people, particularly people who were brown or who were immigrants or had ethnic names like mine, and helping them think through how they were going to protect themselves in this extraordinary period of additional scrutiny the government was applying. I was looking at changes in surveillance laws and how much information the government could gather about individual citizens, helping people think through, again, how they were going to protect themselves, and going to Congress and the White House and battling to make sure that the government didn't take too much liberty with people's privacy, so that people could be safe and free at the same time.

… Were companies [in Silicon Valley] also recognizing the fact that they were sitting on such valuable troves of data? I mean, at that point in time, if the government, whether it's the NSA [National Security Agency] or the intelligence community writ large, was looking to get into their systems, it must mean they had something of value, right? What was that? What was their understanding of the power of the data they had on people?

No, I think Silicon Valley was well ahead of the government in understanding the extraordinary power of consumer data. It was and is the engine that powers Silicon Valley. The companies understood that they had the most immediate access to not just millions of people's information but tens of millions, hundreds of millions, and in many cases now billions of people's information. They knew it was the lifeblood of the industry.

Do you think the government was envious to some degree of that data, of what the Silicon Valley firms had at that point in time?

I wouldn’t say envious; I would say desirous. The government I think quickly realized that the best means of obtaining information was to try to get into the systems, so they knew they were in kind of a cat-and-mouse game with Silicon Valley. They had to both be friends with Silicon Valley and at the same time try to get access to the information. Meanwhile, the companies quite rightfully, and with the help of the ACLU and other organizations like it, were pushing back forcefully, saying: “We don’t want to be your handmaiden for surveillance. We can’t be beholden to you, the U.S. government or any other government, however important the role is that you play. You do a criminal investigation; you do national security; you do surveillance. We build wonderful tools for the public.”

Regulating Big Data

… What were the laws at the time that Facebook, for instance, had to comply with when it came to privacy?

… The answer is still basically the same today. There aren't many, right? The privacy laws that a company like Facebook has to comply with are basically the doctrines set forth by the Federal Trade Commission: don't engage in unfair or deceptive trade practices. It's a very simple, sort of golden rule. Don't say you're going to do something with data that you're not going to do, and do with consumer data only what you said you would. Don't do anything more; don't do anything less. That still is, and was, the only real rule governing a company's behavior with respect to data in the United States.

And was that of concern to you at the time, that there weren’t more stringent laws protecting consumers and their private data?

Absolutely. We've always wondered why. People like me have always wondered why there haven't been more robust protections in consumer law here in the United States, as there are in virtually every other country around the world, to protect consumers, to make sure that companies have more guardrails around their behavior. But the beauty of the current system, which I've come to appreciate over many years, is that it allows companies to truly innovate with data as long as they follow that golden rule, as long as they don't do things that they say they're not going to do, right? As long as they stay with what they're supposed to do and stay within those guardrails, they get the opportunity to give consumers amazing new products and services. So consumers should, and I think they do, routinely benefit far more than they are burdened by the sharing of their data.

But there is a reliance to some degree on the industry to self-regulate, right?

No question, yeah. Silicon Valley has always been in the position of self-regulating. It's always been multiple steps ahead not only of U.S. government actors, but of those around the world. Again, these companies know more than anybody else about the power of this data, and they also know the important responsibilities that they have with customer data. … If they lose customer trust, they will disappear in a hot second. And I think all these companies have always known that, Facebook included.

Bring me to how it is that Facebook approaches you, and set me back in that time. You’re at the ACLU, and what happens?

Sure. So Facebook is this little itty-bitty thing. It’s the upstart darling company, but it’s fighting against other now less-well-known social networks like Myspace and Friendster, which were everybody’s favorite social networks at the time. This upstart Facebook is really trying to get traction. It’s growing with one narrow demographic of college students and recently graduated college students, and it’s kind of an urban phenomenon.

The company begins to innovate with some of the data that it's getting and begins to make choices about some of the signals it's going to send to advertisers as it tries to figure out how to monetize the system well, and … it announces a policy saying it's going to put beacons on people's data and signal to companies when people mention a desire in their social media.

The backlash is extraordinary, and the company gets extraordinary pushback from the public in a way that it had never received before. I think that was one of the great wakeup calls for the leadership at Facebook. It was shortly after that moment, after the backlash over the Beacon incident, as it was known within Facebook, that I was approached to come and see if I wanted to help.

What was that like for you, I mean to get that call and to be asked to come in there and do that?

It was a real sea change. I had been spending so much time thinking about how to keep the government out of people's lives, preventing it from doing extraordinary things post-9/11, particularly to minority communities in the United States. So it was a real change. I thought to myself, wow, I've had some extraordinary luck and success in pushing back against the government, and here is this opportunity to put in place some protections for the digital age. I couldn't help but be excited about the opportunity.

Facebook And Washington D.C.

Describe the role that they wanted you to play in terms of protections and … what your job description was going to be.

Well, they knew they were getting somebody from the ACLU who's totally into free speech and maximizing free speech, so they described it as the first head of government affairs – public policy, in the language of Silicon Valley. Come in; help us build friends and relationships, not only with government officials but with regulatory officials and advocates in the privacy and consumer protection world who I had been working shoulder to shoulder with; help us build this product in a way that people are going to be comfortable with, and also so that we set in place the right policies to help the company grow.

How did it work that you were basically setting up the first D.C. office for Facebook? Which, in retrospect, is pretty amazing. What do you do when you hit the ground? What's the strategy?

Well, truthfully, on day one, one of my assistants and I built our own furniture. I went and got a toolset, we went to Home Depot, I bought furniture, and we built desks and chairs, because this was a startup, and this is what you do in startup life. We had rented a space in Washington, an old, beat-up studio, and we began to build our own furniture, and that was my first welcome to the company: Here you are; grab a toolset; build your own furniture.

At what point did you actually communicate with Mark and kind of – didn’t he make a pitch to you to come over?

Yeah. The senior leadership of the company, I think, were more than aware of the risks to the company and to its growth opportunities if they didn't get really core questions about privacy and speech and other government policies exactly right. They knew that the risk to this growing mission of the company was extraordinary, so as part of my recruitment process I interviewed with Mark Zuckerberg and Sheryl Sandberg and a whole bunch of the other folks on the very senior leadership team.

We had a meeting of the minds where the executives looked me in the eye, and I looked them in the eye, and we all agreed that these were important, quintessentially important values and that the company was going to invest deeply in getting the answers right on behalf of consumers everywhere, not just in the United States.

Invest in what deeply? What did that mean at the time?

To think about what the risks were to consumer data from this dawning thing called a social media company; to think about what it meant for people to be publishers, truly for the first time; to make anyone in the world instantaneously a publisher of their own life; to be able to explain in real time what was happening to them, what their experiences were.

Incredible opportunity for free speech. Incredible opportunity to make citizens really democratize journalism in a way but extraordinary risks because of the data that the company would naturally be collecting.

And what were the risks? What were some of the specific risks at the time that were of great concern?

To me, coming from where I had been before, I was deeply concerned not only about the U.S. government but about foreign governments getting access to this information on Facebook, not only the information that people were publishing and making at least quasi-public, but also the information that was being collected in the background. I had deep concerns, as did many within the company, about government access to consumer data that Facebook had collected on behalf of consumers. That was at least one of the major concerns.

… Was there a strategy about how you were going to educate people on the Hill? What was the outreach program? What were the ways that you were going to set up shop? What was the strategy to begin with?

Well, we made it up as we went along, truthfully. But the answer was as Facebook-y as you can possibly imagine. We're the people at Facebook who make friends. Go to the Hill, go to the White House, make friends. It truly was that simple. Go out and get people who are regulators, who are elected officials, who have their own political agendas, and help them understand that this thing, Facebook, is the opportunity for them to speak to a wider audience, to broadcast their political message, to get their agenda out there for the world to see in a way that had never been possible before, to communicate directly to consumers, directly to citizens. And so we went to them and said, "Here is your opportunity to speak for free, without intermediation." And to a person, elected officials immediately seemed to get it. So we began to have friendships, in the Facebook way, very quickly.

People began to set up their own campaign websites on Facebook. They began to set up an office page for the office of this regulator or that senator or this congressman or congresswoman. Naturally there were people who were first adopters, and they in turn became the champions of this new tool and sold it to all their colleagues.

It’s a skeptical question, but was part of the point of making friends on the Hill, so that your friends don’t regulate you?

Absolutely. We thought immediately that our best strategy to help the company grow was to go and convince elected officials and regulators of the power of this tool and how it could advance their own agenda, by getting them involved, invested in using the service themselves for their offices to advance their agenda. It certainly was likely to diminish the chance that those entities would directly regulate against the company.

Yeah. So how big was Facebook when you joined, and what was the thinking about size at that point in time?

Sure. When I was recruited to join, we had about 300 employees. There have been many that have come and gone, stopped and started, joined and left and thought, this Facebook thing, it's never going to be anything, and moved on to the next opportunity in Silicon Valley. There were 100 million users, give or take; at that point maybe 50 million, but really something like 100 million sort-of-active users. And there was this dawning awareness among a series of us that this was an enormous cohort.

This was bigger than the population of many countries on the globe, and these people acted and thought of themselves almost as their own digital world.

Writing The Rules For Facebook

So it was sort of like a nation-state of its own?

I think some of us had an early understanding that we were creating in some ways a digital nation-state, and that we had to get the laws and rules and regulations in place in order to make this nation-state thrive, and also so that it could operate in relation to actual nation-states, which had laws and rules and police forces and militaries and their own understandings about the rules for privacy and speech and human rights and conflict, etc.

… How did you even begin to draw up a kind of constitution for this new thing? What were the top priorities? What were the things that you were batting around in terms of what was OK and what wasn’t OK on Facebook?

We understood that this was world-changing and that the actions of even a few users speaking their minds would have worldwide consequences. Immediately some of us went to work to think about it: All right, we have extraordinary scope; we also have extraordinary responsibilities. Let's get to work, and let's think through how we're going to build this digital property in a way that respects people's opinions and their political persuasions, their religious attitudes, etc., around the globe, and yet gives them the power to speak and share their own lives, and does it, hopefully, without violating too many international norms.

How do you even begin to do that? I mean, you get in a room with a whiteboard. What do you do in terms of creating that?

I did grab a series of people from across the company who were already bumping into every conflict across the globe – religious, political, social, you name it. Every single conflict around the globe was playing out through Facebook already at this point, so these people, in disparate corners of what was still a small but growing company, had begun to bump into these things.

They had already begun, sort of ad hoc, to make rules about free speech or privacy or human rights, or about how we describe what a disputed territory is going to be called on Facebook. Is it Palestine? Is it the West Bank? Is it East Jerusalem? Is it Crimea? You can name it – every single disputed territory around the globe, we'd already had conflict over.

And people were thinking about it. So I grabbed a group of people, smartest people I could find, and they were astonishingly bright. I was really lucky to be with them. We began in short order to sketch out a series of ideas about how this self-regulating thing was going to self-regulate and to create a hierarchy, if you will, of values.

Mark had given us the vision: We're going to connect the whole world. We knew that speech was central to what we were doing, and emancipating people, giving people the opportunity to share their own experiences and their lives, that had to be near the top of the pyramid of rights and responsibilities. We'd just had this major privacy gaffe before I came onboard, so we knew that privacy had to be embedded deeply not only into the corporate culture but into the engineering, into the sales teams' work, into the products they were going to offer to monetize this company.

Over a series of days and weeks we began to sketch out how these rights and responsibilities were going to intersect with each other, and sometimes they clashed, so we had to make decisions about which of those rights we were going to give priority to. Was it going to be speech? Was it going to be privacy?

If we allow people to speak and they put themselves in danger, what do we do about it? What's the corporate responsibility? We all felt it. We all worried that having dissidents use this amazing communications tool might bring them into greater risk. But we wanted to give them the tool to allow them to speak and explain their circumstances and to protest against repressive regimes around the world. It was a truly awesome responsibility. And [we] simply sketched out what we thought were the baseline rules for this new nation-state of Facebook.

It basically also [defined] what Facebook was going to tolerate on its platform and what it might police if content didn't meet those rules or standards?

That's exactly right. We used the Terms of Service and the privacy policy as our guideposts, and we began to take the rules that were sort of embedded in those understandings, which we had borrowed primarily from other internet Goliaths that had come along before us, and we began to iterate on them and change them and make them tangible. Then we set forth a series of rules to articulate some of these values: human rights, free speech, privacy, protection for the dignity of people, the opportunity for people to share their worldviews and experiences without fear of repression. We tried to bake that set of understandings into everything that was happening across the company, from the engineering to the product iteration to the way that we talked about this, not only with the U.S. government but with governments everywhere.

So what were the sorts of things that you weren’t going to tolerate on the platform?

Hate speech was clear and easy, right? Here you had a group of progressive thinkers, people who value human rights, and we weren't going to allow hateful speech that would incite violence to be part of the Facebook experience. We knew that people would push back against it. We would all recoil if we were the handmaiden of that speech.

And yet there were a series of us, myself included, who were committed civil libertarians. We’ve always believed that you drown out really hateful speech with truthful facts, with more speech, with clear evidence that whatever hateful thing is being articulated is not true. So we took a very libertarian perspective here. We allowed people to speak, and we said: “If you’re going to incite violence, that’s clearly out of bounds. We’re going to kick you off immediately.”

But we’re going to allow people to go right up to the edge, and we’re going to allow other people to respond. One of the great innovations here was that we asked the community itself, all of the communities on Facebook to self-police. That was one of the great opportunities that we had.

We emancipated, we empowered this 100 million-strong community of users, growing daily, to set its own Community Standards and to say, "I don't want nudity on the site"; "I don't want vile speech"; "I don't want x"; "I don't want y," and to report that in to the company. And the company could take technological measures to take down content that was inappropriate.

In some ways we didn't have to build all the constitutional norms ourselves, if you will, for this new digital nation-state. We could rely on the sentiments of this vast user base to decide what the rules should be for this company.

So at the same time was the company putting resources into figuring out how to police and moderate content on the platform?

In fact the company had begun to do so from day one, well before I arrived. Facebook was being built in the shadow of Myspace, which had allowed virtually anything that people wanted to put there to flourish online. Myspace had begun to feel like the Wild West. There was nudity; there was vulgarity; people were publishing everything you could imagine on Myspace. And a lot of people had recoiled against it because there didn't seem to be any balance; there didn't seem to be any rules, because people were anonymous on Myspace. On Facebook you had to show up under your own name, and that's really central to what happened at Facebook.

We found very early on that when people were responsible by name for what they were publishing, the conduct was different than it was on Myspace. That helped a lot. Similarly, the company had moderators from day one so that people could begin to self-police this community, this digital community, and develop norms. Think of it as a civic watch for the internet, and people took their responsibilities seriously. So reports came in, sometimes by the thousands and then by the tens of thousands and soon by the millions, and a whole team of people would sit and spend all day every day simply reviewing the reported content to see if it should be taken down or left up, whether it violated our Terms of Service – this growing understanding of our sense of shared responsibilities or community norms, borne out through the Terms of Service and other documents like the privacy policy.

Were there any concerns at the time about lies, about the spreading of lies on the platform, about misinformation or disinformation?

I think a lot of people early on, unlike what's been said, were deeply concerned about the misuse of this amazing new technology. I think people – at least a core group of people I was with – were thinking hard about how any new media is invariably manipulated and misused, and we were trying to build in safeguards from the very beginning to prevent that from happening. One of them, of course, was this community policing, right? We wanted facts to beat out lies. We wanted real data to counteract nonsense.

What you’re describing sounds kind of like this utopian free speech environment to some degree. … How big an experiment in free speech was this at the time?

This was the greatest experiment in free speech in human history, and yet technology realists like myself who had seen the ugly sides of government behavior, government surveillance, misuse of technologies, were worried very early on that this utopian thing that was being built would be misdirected, would be abused, could be manipulated, so we began to build in safeguards from the very beginning.

And what safeguards were there to make sure that those things didn't happen, that manipulation did not happen?

Well, some of them were very direct. We said no when a bunch of governments came and asked for access to the site, not just the U.S. government. Governments all over the world, I think, had a quick wakeup call that their citizens' behaviors, their actions, their speech were all playing out in real time through this thing called Facebook that was growing by the minute. Simply by saying, "No, we will not be part of your surveillance apparatus," and by really making sure on the security side that the team was policing the site to prevent intrusions, to prevent governments from surreptitiously getting access, we gained a lot of room to let this thing grow and evolve and breathe, but without the unfair, the scary, the dangerous influence of foreign governments or the U.S. government.

The Arab Spring

OK. So Arab Spring. Situate me inside thinking at the company as you’re watching this go down in Tahrir Square and elsewhere.

Let me take you back even a little further.

OK.

FARC [Revolutionary Armed Forces of Colombia], the Communist organization in Colombia, had been engaged in a decades-long civil war in Colombia, and almost overnight there was an organic movement of average Colombians organizing to come to the streets, push back against FARC, and say, "No more." It happened through Facebook, and it was the proof point that many of the things my team and the other good people around the company were trying to build had incredible value. It allowed people to organize en masse … without anybody interfering, to come to the streets and speak their minds. You have this moment when hundreds of thousands of people in Colombia flood the streets in protest against FARC. So that had comforted many of us: while this thing we were doing had incredible risks, and while we felt the responsibility on our shoulders to get it right every hour of every day, we were doing something right.

The Arab Spring occurs, and it feels like Colombia 2.0 but on an enormous scale. Suddenly the Arab world is coming to the streets to throw off repressive regimes, and they're doing it not just in one country; they're doing it across the Middle East, and they're doing it through Facebook. And as soon as governments try to shut off the internet, people go and find new ways to turn Facebook back on, to get back up onto the internet so that they can continue to organize against government repression. It felt, to me and to other people like me, like this thing, Facebook, had extraordinary power, and power for good. I remember feeling elated to see people use this tool, a free tool, to do things that they could never have done before: to organize, to share their world, to show violence that was being foisted on them by people in their governments who were trying to prevent this uprising. And they were showing it in real time, and they were displaying it to the world and the news media.

Here we have this incredible new paradigm. The news media is taking clips from Facebook in real time and popping them up on CNN. It can't get any more real. It can't get any more real time than that. So this tool that I had a little tiny part in building is citizen journalism in its highest form.

So from your vantage point inside, do you remember, kind of anecdotally, what it was like for all of you to watch this go down?

Yeah. Now, I was elated; my colleagues were elated. This seemed to prove that, despite all the criticism about Facebook and some of its missteps, we were doing something really important, and we were trying to guide it as best we could. So simultaneously many of us feel elation that we're empowering people to fight back against governments. It's real. And yet there's this extraordinary responsibility that many of us felt to make sure we didn't mess it up, because we knew that just as easily as these citizens were using these tools to organize in the streets, governments could try to manipulate those Facebook feeds and do dangerous things on a scale and scope never before believed. So elation and responsibility, and I felt them – my team felt them – in equal measure.

Disinformation And Misinformation

But what actual steps were taken to mitigate the risk that an authoritarian regime could in some way use the vulnerabilities of Facebook and spread misinformation or incite violence? What steps were taken at the company at that point?

One of the things the company did was to work on security on the back end. There were all sorts of efforts to prevent those governments from having access on the back end. Some of the best security people in the world were working to prevent repressive regimes from breaking into Facebook and manipulating what was happening, or getting access to people in real time around the world. We also said no to a whole bunch of requests. Frankly, to all the government requests from those repressive regimes we flatly said, "No, we will not cooperate." There was an understanding that growth of the company, while paramount, had to be measured against this extraordinary set of responsibilities to make sure that we weren't endangering people. In fact, I can't recall a time during my tenure when people didn't say: "No, we aren't going to do that; we will not play ball with that government or that government or that government, because we know the risks are great." Nobody cared to make a buck off of empowering the regime in Iran or the government in Myanmar. It just wasn't something we were going to do.

Was there a concern about policing misinformation on Facebook, especially misinformation that may be a propaganda tool for an authoritarian regime?

There was. But the primary tool was that we were empowering people to be their own citizen journalists around the world. We believed at the time as a group – if I can typify what a whole group of people thought – that the citizen journalist concept would unmask lies, and we believed that people, given the option between truth and lies, would pick truth every time. So we relied heavily on this community policing concept. We relied heavily on the citizen journalism concept, and we thought those were the best cures. There's always been manipulation of media by autocrats and dictators. Every new medium since the Gutenberg Bible and the Gutenberg printing press has been misused: radio, TV, now social media. But we believed that, unlike those prior iterations, those prior innovations in media, because we were giving people all over the world the power to speak their minds and explain their world in real time for free, we had found a way to counteract that sort of propaganda, that set of lies. And we hoped fervently that that would be enough.

Facebook And Washington D.C.

How big was the D.C. office growing? You started it out. Who were you hiring? Describe what the D.C. office became the hub of at the time.

The D.C. office grew the way I think cells divide. You have one person, and then over time, and it takes a while, you get two, and then you've got four and eight and 16 and so on. That was important, because as the company was growing its digital footprint around the world, and hundreds of millions of people are being added organically because they're choosing to join Facebook year after year after year, the public policy team has to grow to be able to respond to all of these requests. They're coming in from advocacy organizations advocating every opinion on every issue you can imagine, from every elected official anywhere, from any government everywhere, and they are thirsty for knowledge about Facebook, and they want to influence its growth and direction, and they want to have whatever their political agenda is aligned with or directing Facebook's growth.

The growth of the public policy team never kept up with the growth of the user base. It certainly didn’t keep up with the growth of the sales departments or the engineering department. In many ways, it was the small, tiny little department that had to do a whole lot with not very much in terms of resources, because it’s a cost center. The rest of the business is making money. This thing is trying to help set rules about how Facebook will intersect with norms and laws and regulations around the world.

What's the effect of investing less in the public policy group than in the growth centers of the company?

Well, we had some really good people, people who were conscientious and thoughtful and worked extraordinarily hard to get these really novel questions, questions the world had never seen before, ironed out in the best way possible. You had incredibly engaged people from across the political spectrum who were joining the policy department and the legal department and really engaging deeply with the sort of questions that play out everywhere in human life – all the political, religious, military questions around the world, every geographic dispute, every dispute of ethnicity and tribe and race, all playing out on Facebook – and these people were dedicated to trying to iron out things that the rest of the world doesn't know how to iron out.

There must have been some effect of investing more in growth than in the public policy team that’s kind of ironing out the difficult challenges and risks on the platform, right? I mean, what was the effect of that?

No one could ever have predicted how fast Facebook would grow. The population of Facebook users was growing almost geometrically, and it was accelerating, and no amount of planning or investment could have led to having a suitable, sizable enough team to respond to these inquiries. I mean, these are ages-old conflicts, and they are playing out in the digital world the same way they do in the real world. We worked like crazy to get it right, and people were highly leveraged and highly committed, and generally I think they got the questions right. Then we had a moment of inflection, and every system, every plan that we had put in place within the company to scale up was obviated by the growth. The trajectory of growth of the user base and of the issues went one way, and staffing throughout the company went quite another. The company was trying to make money; it was trying to keep costs down. It had to be a going concern.

It had to be a revenue-generating thing or [it] would cease to exist. Of course everybody in startup land knows this story. You always underinvest in people. That's what every startup everywhere does. Facebook was simply, in this respect, guilty of being another startup, right? No one could have predicted that it would become this worldwide Goliath, or that suddenly the population would grow by hundreds of millions of people almost monthly.

The Facebook IPO

I want to explore this moment with you, because this is so important. … Could you be as specific as you possibly can about the sorts of public policy measures that could have been in place, measures that maybe would have slowed growth to some degree, that you or your team would have liked to have implemented, but that simply became impossible to do, or that the company was unwilling to do because it was growing so quickly?

I think one of the big moments happened after I left Facebook. We had always made the decision, because of the early privacy gaffes – the ones that predated my arrival and predated the public policy team even existing at Facebook – that we were going to treat consumers' data as sacrosanct. We knew that we had to gain and keep the public's trust with respect to the data they were volunteering to us, and that we would do with that data only what they wanted us to do with it and nothing more. Then at some point – and I can't say where it happened, because I wasn't there – I think there was an attempt to make the ad revenue really pop, and people began to think that they needed to augment the data. And so there's this extraordinary thing that happens that doesn't get much attention at the time.

About four or five months before the IPO – I've just left the company – the company announces its first relationships with data broker companies. Facebook is quietly partnering with these companies, which gather data from other parts of people's lives, both online and offline, and sell it to Facebook. And Facebook, I think, is then turning it into part of its advertising tool, to augment the already extraordinary data that people are volunteering to Facebook. That change alone, I think, is a sea change in the way the company felt about its future and the direction it was headed. At that point it becomes important to make the company a blockbuster success financially.

The IPO is months away, and the revenue accelerates, and Facebook begins to take enormous market share away from all of the other online ad platforms that are opaque, that nobody knows about, that are making oodles of money doing whatever they want in an unregulated state. Now Facebook has decided they’re going to beat them.

Explain it to me as if I’m like a kindergartner, seriously. Explain to me what you identified as being problematic about, first of all, who these data brokers are and what that really signified about a change in ethos at Facebook.

Yeah. There are all kinds of ways to define what a data broker is, but there is a series of companies that most Americans aren't at all aware of that go out and buy up data about each and every one of us: what [all] of us buy, where we shop, where we live, what our traffic patterns are, what our families are doing, what our likes are, what magazines we read, and so forth. They collect that data, and they sell it to the online ad world. Every company online, it seems, buys data this way; every retailer buys data from these data brokers. It's the power behind much of the consumer retail online experience. It empowers these companies to provide predictions about what you're going to buy, what you're going to do, and where you're going to do it, where you're going to spend your money in particular.

The way that you said it, they exploded. Something changed in the nature of the company; the trust that it had with its users changed, because it's now gathering up even more data, data that users didn't offer up to Facebook. …

So the marrying up of data from Facebook with data from other sources changes, I think fundamentally, the nature of the relationship between the average Facebook user and the company. At that moment, the company is augmenting data, and it's augmenting it with data that the consumer doesn't even know is being collected about them, because it's being collected from the rest of their lives by companies they don't know, and it's now being shared with Facebook so that Facebook can target ads back to the user. Now, those ads are, in the language of Silicon Valley, better targeted. They're more personalized, they're more customized, and therefore more beneficial than other ads. But something has changed. The user, the average Facebook user, will never understand, I believe, that the data the company is getting at that moment is not entirely controlled by them, and that's a sea change in the way this company relates to the consumer.

Where the consumer basically becomes the product and the clients become the advertisers?

This has always been an ad-driven ecosystem, so I don’t know that that’s the change. If you think about it at least from my perspective, all journalism is effectively an ad-driven vehicle, right? Almost all of it is empowered by the attempt to sell ads; everything on TV is simply in many ways a vehicle for ad service to the consumer. Otherwise we wouldn’t have content on TV. The same is true with radio.

… I think it's fuzzy thinking to think that somehow the average Facebook user at that moment becomes the product. I've always recoiled at that language. I don't think that's the right way to think about it. I think what's lost is the control. The company had given consumers the understanding that their data, when shared with Facebook, would be used to provide them with products and services, or advertisements for products and services that they might want, but based on data that the consumer was voluntarily sharing through Facebook. The addition of information that has nothing to do with your activities on Facebook causes a loss of control for the consumer, so the consumer can't say no in the way that they could have before. And of course the ad revenue spikes, right, because the ads become ever more potent, because the company suddenly knows a whole lot more than it already did about each and every consumer and can make better predictive models for all the companies out there that want to advertise to consumers.

… What signs were there when you were there in the lead-up to the IPO of a new direction for the company? How did you feel that? How did you sense that?

Well, we began to bring into the public policy team senior people who had held high roles in the White House in both parties, and the public policy team grows. We bring in veterans who have been there and done that, and those people have a different mentality about how Facebook should engage in this realm, the political realm, and that changes the approach of the public policy team worldwide almost immediately.

And the substantive difference between those who wanted to engage and the new guard is what? How do they think about it differently?

I believe there was a sense that there was too much risk, and that by friending everyone Facebook was exposing itself, so there was this sense of disengagement that began to occur. And I think it probably was the right thing to do for the investors, for the shareholders. It obviously has had consequences.

So there was less of an interest in engaging with solving problems? Is that what you’re saying?

Yeah. Quite simply, we went from being everywhere and meeting everyone and making friends with everyone to, I think, sort of pulling back, retrenching, choosing which interactions we were going to have, choosing very carefully the moments when the company would speak about these issues.

Who was the architect of that policy?

Well, that’s hard to say. It was not me. …

Was part of the decision, you think, to disengage from the really difficult public policy problems and social problems that Facebook may have been a part of? Was the decision to disengage from that in part Sheryl [Sandberg's]?

I think the whole leadership of the company made a conscious choice to step back. I think one of the signature moments was a dispute with Sen. [Charles] Schumer (D-N.Y.) that happened about a year before the company went public. Sen. Schumer, who had been largely supportive of the company, felt like the company wasn't being responsible. He came on a radio program one Saturday in New York City and gave some remarks about Facebook, and then brought the company and its leadership to Washington to talk about its responsibilities and its role. I think that conversation with a senior, truly powerful senator had quite an impact on the very senior team at Facebook, at least from my perspective. And I think there was a genuine sense that for the company to survive and thrive, it had to change and grow into something that looked more like any other traditional company.

As opposed to a company that did what?

As opposed to a company that was not just a company, but a company that allowed the world to connect, that gave people the power to publish anything about their lives in real time and share it. There [was] something special about that company, and I think the mission became less important at that point. To my mind, Mark's vision of connecting the world was not set aside, but it was balanced against the pressures brought by a lot of external forces to make sure that the company successfully launched on the public markets.

Is Facebook Too Big?

… Is there a point where the company had grown so big and the scale had become so vast that it basically says it’s impossible actually to police or moderate or arbitrate for certain things that are happening on the platform?

That happens years before this moment. The company is growing so quickly that every system, every technology, every process put in place to ensure fidelity with core values is tested, hourly, by the growth. There is no way to control the pressure that that puts on any company. No company in history has ever grown as fast as Facebook. There is no other company out there that has two-sevenths of the world's population using it, with almost one in every five people on the planet using it every single day. There is nothing like it. There's no parallel; there's no system or process that can respond to having one-fifth of the world's population using a service.

Was it sort of like a dirty secret in some way that this had grown too large too quickly and was kind of uncontrollable?

You know, I think a lot of people internally were challenged by what to do. Facebook is a story about engineers, many of whom are utopian, and these utopian engineers who had wanted to build this wonderful thing to give people the power of speech, to democratize speech, were left with this problem that even their best coding couldn’t keep up with the pressures brought by extraordinary growth, unprecedented growth.

And those are basically the social problems that exist in the world writ large.

Right. Facebook is a mirror of the world. It is a reflection back on all of us, of everything that can happen in our lives – every dispute, every conflict; all the good stuff, too, but all the conflicts. And those things are intractable, right, in many cases. Facebook is limited in its ability to solve things that nobody else can solve either. I don't fault the company. I don't fault the people who made decisions after me for not knowing how to handle the world's disputes. I'd love to see someone who could do better. It's hard for any of us to judge what it's like: the power, the responsibility, the pressure to reflect the world as it is, to have people speak honestly and forthrightly and share with each other, and to tackle all these problems. No company is equipped for that. There isn't a government in the world that's equipped for that.

… So as you’re expanding globally into places that are rife with conflicts, there isn’t necessarily the commensurate growth of people inside the company who are able to in some way deal with that.

Yeah, that's exactly right. There was not commensurate growth, but I don't know what commensurate growth would look like. I don't think anybody knows how many thousands of people it takes to police disputes between the world's religions, or how many thousands of people it takes to police disputes between nation-states. I don't know. I don't think anybody else knows.

Section 230

Section 230 of the '96 act or –

The Communications Decency Act. Exactly.

… Can you explain what that did and how important that was to the growth and the responsibilities of Facebook?

So Section 230 of the Communications Decency Act is the provision which allows the internet economy to grow and thrive, and Facebook is one of the principal beneficiaries of this provision.

It shields companies that are platforms from the actions, or liability for the actions, of people who are using those platforms. It says, "Don't hold this internet company responsible if some idiot says something violent on the site; don't hold the internet company responsible if somebody publishes something that creates more conflict, that violates the law, that fails to abide by tax laws or local laws." It gives the platform some immunity from liability. So you see that the major tech companies are all platform companies, and they rely on Section 230. It's the quintessential provision that gives them the room to grow, that allows them to say: "Don't blame us; go regulate the speaker. Don't blame us; regulate the company that's breaking the law."

And how important was it? … Inside the D.C. office, for instance, how important was it? How important was 230 to you?

It was everything. It is still everything; any of these internet companies will go to the mat to fight any attack on Section 230. It is the provision that allows a company to grow and thrive, to attract entities to do work through it, other companies to take action on it, and consumers to publish on it, individuals to speak their minds and really engage in the internet. Without 230, much of the internet ecosystem disappears, or becomes less vibrant for sure. So every internet company understands that it has to fight like mad to protect 230 at all costs.

Was it also used as a shield from responsibility, for taking responsibility for what was happening on Facebook’s platform?

I think every internet company has used 230 as both a sword and a shield, right? They use it as a means of fending off responsibility, and they use it to prevent regulation from being enacted, because they can point to supreme federal law that preempts state laws that might otherwise begin to regulate the internet. So yeah, it's essential.

… But a media company bears responsibility for the information it puts out in the world, and when Facebook becomes the largest news source and the largest source of information in our society, then do they not bear any responsibility for the information that they’re feeding to people?

I think every media company has an editorial responsibility, and one of the things I think that’s happened is that Facebook has realized belatedly that they need to have an editorial viewpoint. [Facebook] can’t simply allow anymore, apparently, people to speak in any way that they want and rely on Community Standards and community policing. They have to venture back into the fray. They have to have an editorial viewpoint the same way The Wall Street Journal does or The New York Times or any other news outlet [does].

Policing The Platform

When you were there, were there discussions about Facebook's responsibility for an editorial viewpoint? Was there discussion about Facebook actually taking on responsibility for the veracity or the quality of the news or content or information it was feeding out to people?

Certainly with respect to ads. We tried to police junky ads, mostly because it was a bad experience for people to get weight loss pills or snake-oil health cures thrown at them day after day. Nobody wants to see those ads. Facebook was actually pretty good about deciding not to take that revenue early on. But it was mostly a question about what kinds of ads would be allowed rather than what messages would be allowed. We had the Terms of Service that said: "Don't do the following things, and otherwise you're free to do what you want. And by the way, community, if you don't think this is right, report that content to us. We'll take a look at it and see if it violates our Terms of Service." But it was very libertarian, very laissez-faire, and that has always been the ethos of the company.

But why now do you think that that's not something they can do, that that's something that has to change?

Well, I would prefer that it stay the same. I really don't know that we can ask Facebook executives to substitute their judgment and to have people who aren't trained in journalism act like journalists. Back then, Facebook didn't have a voice; it didn't speak as Facebook, right? I think of traditional journalism as having an op-ed page where the editorial writers get to put out a viewpoint, and they speak and they own it. Facebook never spoke to its user base; it didn't interject itself into the stream of activity. Rather, it surfaced the actions, the comments, the likes of the people in your life, the people you cared about, but it wasn't Facebook showing up to say: "Hey, we have a perspective here. Here it is." That only happened in extraordinary moments. …

In the early discussions about what content was going to be OK or not OK on Facebook, was there a discussion about arbitrating for the truth, or a discussion about people spreading lies or rumors or hoaxes on the platform and what responsibility the company was going to have to stop that?

Yeah, the company wanted engagement, but it didn't want it at any cost. And the company I think had some humility. The people who were at the company at the time recognized that they couldn't police the content of the world. They had to have some humility. We had to set up some ground rules: basic decency, no nudity, and no violent or hateful speech. After that, we felt some reluctance to impose our value system on this worldwide community that was growing. We wanted a safe place, a trusted place for people to come. But we knew that we would never be wise enough to make the right call every time – none of us were omniscient, and the company couldn't act like it was – and I think there was a reluctance to engage in content moderation. In many ways, we pushed it out to the consumers, the users, and said: "No, no, you decide the community that you want, and you tell us what's out of bounds, and we'll police against that."

Were the people that were policing the platform given the resources they needed to police it properly?

I think no one at any of these companies in Silicon Valley has the resources for this kind of scale. No one could have imagined how the growth would happen. You had queues of work for people to go through, and hundreds of employees who would spend all day every day clicking yes, no, keep, take down, take down, take down, keep up, keep up. They were continuously reviewing every image you can imagine, every kind of speech you can imagine, and making judgment calls, snap judgment calls: Does it violate our Terms of Service? Does it violate our standards of decency? Does it make Facebook a trusted place or an untrusted place? Does it make it a safe place or an unsafe place? What are the consequences of the speech? So you have this fabulously talented group of mostly 20-somethings deciding what speech matters, and they're doing it in real time, all day, every day.

Isn’t that scary?

It’s terrifying, right? That responsibility was awesome. And yet the opportunity to direct the digital world in a better way, to drown out bad speech, hateful speech with all this amazing speech, to show the lies of repressive regimes by publishing the truth was such an extraordinary opportunity that it outweighed that terror, and it made us all dig in and try to find the way through to get to the right result.

… Was there not a concern then that it could become sort of a place of just utter confusion; that you have lies that are given the same weight as truths; that you have no arbiter of veracity and that it just becomes a place where truth becomes completely obfuscated?

No. We relied on the public’s common sense and common decency to police the site.

We believed – and I think it's still right – that if you put the community to work and empower it to judge the quality of a statement, the truthfulness of a statement, it will find a way through. Yeah, there are going to be hiccups; there are going to be painful hiccups. We've seen some, but there's also the opportunity for the public to bury hoaxes in mountains of fact. And there was a real sense that this self-policing community was capable of finding fact and ferreting out lies.

But what if you’re in your own filter bubble? What if you’re only seeing what it is that the algorithm thinks it is that you want to see, and you have no idea about the totality of information out there, but you’re only seeing what plays to your own bias?

I don’t think that there was a full understanding, the way that there’s beginning to be now, about the power of amplification of messages within a closed community. But I think it unfortunately reflects societies all over the world. People talk to the same people. They intersect and interact with people generally who share their worldview. Unfortunately, Facebook mirrors what the rest of the world is like. It’s not different in kind and quality, and I think to say that Facebook is different is to absolve everybody of responsibility they should have in their own lives to break out of those echo chambers. But those echo chambers exist, and the digital world and Facebook in particular simply replicate them.

Facebook doesn’t just mirror society and echo chambers. It amplifies them, doesn’t it?

Well, I think it does now. I think the algorithm is different; it isn't what it was, and I'm sorry it isn't now what it was before. The algorithm seems to have produced some sort of extraordinary echo effect that builds to a crescendo and in some ways blocks out contrary viewpoints, and something will have to change. We'll have to go back to a different system.

originally posted on pbs.org