The Facebook Dilemma | Interview Of Andrew Anker: Former Facebook Director of Product Management

Andrew Anker was a director of product management at Facebook from 2015-2017. This is the transcript of an interview with FRONTLINE’s James Jacoby conducted on May 18, 2018. It has been edited in parts for clarity and length.

I think that in some ways, this may be best if we go chronologically, but we can, if you want to jump around, that’s fine with me. One, I just would love to know a little bit about your background. One of the things that we talked about at lunch was that you kind of speak the language of publishing, and you also speak the language of tech, so what was it from your background that gave you both worldviews?

I grew up just loving media. I mean, I literally started a family newspaper when I was like, 11. I was the editor-in-chief of a newspaper in college-in high school, rather. There's just been something about media. I didn't know anything about [Marshall] McLuhan, and I didn't know anything about the ideas that he talked about, but that idea that media was created, that media was something that you could understand-for some reason I just got a very early glimpse of that and just became convinced that anything I wanted to do was going to be media-related. At the same time, you know, I'm just the right age that at 15 we got an Apple II at our house, and I went right from playing to programming. And I had that kind of mind that really got how programming works.

When I eventually met Nicholas Negroponte and learned all about his ideas of convergence and everything, I didn't realize again explicitly that that's what was in my head, but that idea that these are really the same thing was just part of my woodwork growing up. So when-I started out actually in finance and banking, I then did a startup where I was mostly an engineer, but in that process I just sort of fell in love with the tech startup mentality, and that was what made me move out west. I was born and raised in New York City, and I just started looking at tech stuff, not really media, until the Wired business plan hit my desk. When I saw a business plan in 1992 that talked about not just tech but the idea of the culture of tech, the importance of tech as a-you know, as we said at Wired, not just about speeds and feeds but about the people who built the stuff, I saw my worlds converge at a very early age, and I just, without really explicitly thinking about it, made a career out of playing at that borderline between the two. When I was hired as the original CTO [chief technology officer] of Wired magazine, a lot of what I was supposed to be talking about was just the theories behind the technology, behind the magazine's area of focus, but then it became, what do we do online? And so that turned into this project we called HotWired and Wired Digital. It turned into needing a business model.

Again, having that media background, I was able to bring advertising onto the internet, for better or for worse, and it really just melded everything together in a way that-and I wouldn't say I'm the only one, but that made me realize that they could put me in a room with New York media people. I knew how the model worked, I knew how to fund these kinds of companies, and I knew how to build them. But then I could sit down with my engineers, and when they would ask me questions or when we would talk about things, we could talk about how to normalize a database for the web, which actually ended up being really different than how you normalize a database for software. It just became this really fun thing that we made up by ourselves at the same time as we were trying to figure out how to build a business and how to talk to people.

Journalism And Silicon Valley

…How would you characterize the understanding out here in Silicon Valley of what journalism is?

Yeah. I want to be careful, because I don't want to really throw this whole valley under the bus on this. But at the same time, I think we've lived in an environment over the last 20 years where the media is changing at the same time as technology is trying to understand how to grapple with media. For some of the older of us who grew up with Walter Cronkite, that idea of media, especially news, as a very serious public-interest part of being a media company has been subverted by a lot of what's gone on with cable news and some of the other areas where it's been a lot less serious. And so I think news and media in general are changing at the same time as a lot of the people who are the most critical distributors of media, like Facebook, Twitter, Google, and whatnot, are trying to grapple with a moving target.

It's made it difficult, because we don't always speak the same language. But I don't think even what we would be speaking about is the same anymore, so we really end up in a place where we could be thinking we're having the same conversation using the same words, and yet my meaning of those words and your meaning end up in very different places.

Like how so? What do you mean more specifically in terms of the same words can mean different things to different groups?

Well, I hate to get too semantic, but even what is news, and getting into the actual idea of news-and I think you talk to some of the older guard, which, again, I grew up in-news is a sacrosanct thing with a capital N, and it's unbiased, and it's about the truth, and it's about reporting things that have a very explicit journalistic process. It's very well thought through, and journalism schools teach it, and it's understood as a very concrete and discrete thing.

In a world where only the people who could practice that kind of news journalism could get jobs at big entities that had to go through getting FCC licenses or had to build big printing presses, there became a gatekeeper on what is news. There became a gatekeeper on how to practice news that meant that you were unlikely to interact with news that was anything other than quote, unquote "serious journalism."

…I think we live in a world where the people who think they’re journalists and the people who think they’re creating news are much wider than what others who maybe are in the old guard would think of [as] news and journalism, so those words can mean very different things to certain people. If I’m tweeting like our current president does, I could be creating news or I could be talking about what I ate today. If I’m writing a blog, it could be something that is nothing interesting, and it could be something that is majorly important.

…So we have a very confused world, especially if you’re a platform trying to then make some decisions about what is real and what is fake; what is good and what is bad; who deserves to hear what and what kind of filters should we put in front of them.

When you said among journalists there is a feeling as though what we do is to some degree sacrosanct, …do you think that, especially among young tech engineers in a place like Facebook, that's what's understood as journalism-something that's a sacred public trust?

I don't know. I'm not sure that it even matters to some extent, because when journalism is allowed to be everywhere and sit right next to pictures of my family or my dog or what I ate yesterday, even if we do understand as technologists what is journalism with a capital J and what is not, it's not always clear how you differentiate, or even if it's anybody's role to differentiate. It used to be easy. A journalist was hired by a major newspaper or a major TV network, and they practiced journalism. And if you saw it on TV, you could count on it being something that had some level of fact checking, had some level of rigor behind it. At this point, even if you know what journalism is, it can be everywhere. It can be a tweet. It can be a blog post. So there's not a lot of intermediary value that anybody's providing, even Google and Facebook, that says, "This is something serious" versus something that's not. By having such a blurry environment-and at the margin it's always been difficult to define journalism even with the normal rules about what journalism is-you can really get into a world where it's almost an irrelevant question.

Social Media And The Business Of News

I’m kind of curious if you could in your view tell me the story of how the platforms like Facebook got into the news business in a way.

I would say Facebook enabled people to communicate with each other, and news became one of the things they wanted to communicate about. Many of the feature sets that you see at companies like Facebook-and this is before my time as a product person-the features are developed almost after the fact. You see a use case has already happened, and then you say, "How can we make this use case better?"

And by “use case,” help me. Layman’s terms.

Sure. Whereas Facebook might have been designed for you and me as friends to share pictures, if I decide to share not a picture but a link to a New York Times or Washington Post story, I'm not necessarily using that feature the way it was designed, but it's a perfectly valid way to use that feature. So what happened at Facebook is that somebody saw that kind of distribution of news content and said: "What can we do to make it better? Rather than just put a picture in, let's go get the headline. Let's go get some aspect of the actual article itself and add that to News Feed."

This was before my time at Facebook. I think it was a lot more that one day someone at Facebook woke up and realized that a lot of the news industry's traffic was coming from Facebook than that anybody actually said, "Let's go after the news industry."

Did the rise of Twitter have anything to do with that as well in terms of sharing of news and the sort of competitive environment in Silicon Valley, for instance?

This was before my time at Facebook, but I have no doubt that some aspect of it was the fact that there was already a lot of that news discussion going on on Twitter, and that became a sort of interesting thing to have on Facebook. But at the same time-and this is my experience as just a user-I would share a story on Twitter, and I'd get one or two responses. I'd share a story on Facebook, and I would get a ton of responses, whether it was a personal story or a news story. I think a lot of what happened to Facebook was that it just was a much better environment for that actual discussion with people you care about.

And news is a good thing to talk about, so I would say that more of Facebook’s news footprint, so to speak, is because of where users took the platform than because of any competitive response to someone like Twitter.

Engagement And Facebook

When engagement is the metric, right, and when it’s all about what your users are engaging with, is what you’re saying basically that Facebook learned from its users that news was engaging?

Yes. I think news is always something that’s interesting. It’s always something that we talk about. One of the things about Facebook is if you and I as users posted 20, 30, 40 times a day, we would be all Facebook needed. But when we don’t and The New York Times or The Washington Post does, it becomes a much easier way to see that kind of engagement and that discussion happening, more so certainly than some of the friend-sharing stuff that happens on Facebook as well.

…On the flip side of it, news organizations are starting to recognize something at the same time about where their audience is and how they’re getting their traffic and where their advertising dollars are going as well. Tell me about that shift in time.

I was on the outside of Facebook for the 15 years up until 2015, when I joined, and that era, from really 2000 to 2015, was really about finding traffic. I think by 2000 everybody realized that the internet, the web, was the future of news. Certainly then with mobile, it became that much more so. But nobody understood really how to get traffic. And then Google happened, and Google really became this massive fire hose of traffic, and the whole industry of search engine optimization came out to really play to that factor.

Then Twitter started, and in 2007, 2008, it really grew to become not as big as Google for most properties but a really important source of traffic, especially on some of the higher-end serious pieces where journalists really like to talk about things. I think Facebook sort of came out of nowhere. Then all of a sudden, in 2013, 2014, everybody started to look at their logs and realize that more and more of their traffic was coming from Facebook. At a time when Google had already been around for 10 years as a source of traffic and SEO had been played out in a lot of ways, people didn't know how to get more traffic from Google. They didn't really understand how to drive Twitter. But Facebook was just this straight line up in traffic, and it played well to what news did and engaged people really well. So really from I would say 2013 to 2015, everybody was putting as much effort as they could into their Facebook traffic, and Facebook was very willing to help with that.

On some level it was good for Facebook but not so great for the media companies in that in part it would drive traffic, but a lot of the advertising revenue would remain on Facebook, and essentially almost overnight Facebook seems to kind of take over the news distribution system for all these media entities, right?

I think one of the interesting divides between technology people and media people, technology people are used to building on someone else’s platform, and they’re used to the good and the bad of that. If you’re building for Windows, you know that Windows is going to change every couple of years, and you know that Windows might change in a way that hurts your product. There’s a long history with Apple and with Windows and Microsoft of technology platforms making it difficult for the people who build on those platforms. Media had never had to deal with that problem, so I don’t think there was a lot of muscle memory.

When all of a sudden more traffic came from Google and then more traffic came from Facebook, there wasn't a lot of muscle memory inside media companies to say: "Wait a minute. That's a double-edged sword. Now somebody else controls our traffic. Maybe it's going to be harder to monetize that traffic. Maybe in fact we're handing over some of the keys to our most important audience to a platform." I think some of where we ended up would have been much more obvious to a tech company than it was to a media company, and I think a lot of what we're dealing with right now, with media companies and newspapers in particular trying to move to subscriptions, is finally happening, you know, four, five, six, seven years later than it should have been.

The media companies are saying, "We really need to own that traffic ourselves; we need to build a direct relationship with our audience," and that was a really different lesson that I think media companies had to learn over the last 10 years.

Facebook And The Distribution Of News

…You weren’t there at the time, but you were there soon after. But was there a realization inside companies like Facebook as to what the responsibilities would be of becoming the main distributor of news?

I don’t think there was a lot of thinking about that, that idea.

I don’t think there was any thought that this media content or the news content in particular had more value or had more need for protection than any of the other pieces of content on Facebook.

Is that naive?

I don't know that it's naive as much as the reality of today's environment. News doesn't feel sacrosanct the way it used to. News in the old days, when I grew up in the '60s and the '70s, was done by basically three companies in the U.S.-ABC, NBC, CBS-and then I guess also PBS and one or two others. That had a sense of gravitas to it. By the '90s and the 2000s, with cable news and with all the different ways that people could consume news, news was just another format. You know, you had your news; you had your sports; you had your entertainment. I think there was just much less appreciation that there was anything special about news and that news had to be treated differently within the platform world than any other piece of content.

Was there even a discussion about the responsibilities of becoming the main distributor of news and information for a community of hundreds of millions or billions of people? Was that even something that was on the radar at a place like Facebook as they were becoming very quickly the main distributor of news and information?

By the time I got to Facebook in 2015, Facebook was already the largest distributor of news in most areas, in most geographies, so I can’t speak to what discussions happened to get Facebook to that place. One of the things I was asked to do as the person who ran the news product for a while at Facebook was to ask questions like that, like what is the responsibility of Facebook, and how much of a filter, how much of a signal of credibility and authority we should be adding to the News Feed?

There are really two questions baked into that, though. There's the "Should we do this?," and then there's the "How would we do this?" I would say that "Should we do this?" became increasingly yes between the election of 2016 and a lot of what we then found out was going on on the platform with fake news and a lot of the propaganda that we learned about. So the question of "should" became a pretty easy one in the 2017 timeframe. The question of "how," I think, is still an open question. … I would run focus groups with large newspapers and broadcast news journalists to ask them the question: "How do we decide what is good and what is bad? What is high integrity? What is low integrity? Or [what is] high credibility and low credibility?" It was a very interesting meeting of the minds between the technology world and the media world, because I had to take the answer to that question and turn it into technology. And they would look at me, and they'd say, "Well, we're all serious news."

And I would say great, we can agree on that, and I think we can agree that there is an extreme of a disinformation and propaganda that is bad. But where in the middle do you draw the line between propaganda and stuff that’s bad and stuff that is maybe analysis, stuff that is perfectly OK in a normal media environment?

Or even opinion.

Or even opinion that probably deserves a voice and deserves distribution, but maybe need[s] more context so that it's understood not to be factual reporting but is meant to be opinion or analysis. I would say even within the journalism community, there's a lot of question about what is the difference between, for instance, news and analysis. I had a discussion with a number of large newspapers where I talked about the idea of tagging news posts as either news or opinion or analysis, and I had some large newspapers that said, "Great, that would really help to differentiate the two." I had others that said there is no difference; every piece of news is analysis, every piece of news is opinion, and every piece of news has reporting in it. So I realized that even something as simple as adding a tag in Facebook's News Feed that says this is news or this is opinion would be very contentious within the news community.

Problems With News

…When was that? Give me the timeline as to when you started really kind of thinking through the news issues.

Yeah. We started thinking about the news issues in early 2016, and we constituted a group, and we were given a sort of approval by Mark [Zuckerberg] to go forward in the May-June 2016 timeframe.

And what was the impetus for that?

The impetus was this understanding that news was becoming a very important part of Facebook and that maybe it was a little different. One of the things-I don't want to claim any specific credit, but one of the things that I did as part of putting together a plan for building a news group was trying to bring some of the historical perspective: the fact that news had historically had a much different role; that news in some ways was a public good; that news was the thing used to justify your FCC license and that you pointed at as the thing you maybe lost money at but that you did because it was the right thing to do; and that it might be time for companies like Facebook and other social media sites to start thinking about that kind of responsibility.

In those early days, clearly there was a shift of thinking where maybe we should start thinking about it. But what was your sense when you walked in the building or when you developed that group? What was your sense of the thinking that was already entrenched at Facebook about its responsibility for news?

…When I got to Facebook, media content was thought [of] more internally as public content. That was actually the name of the group that I joined-public content as differentiated from the private content that you would share between friends and family. I don’t think Facebook spent a lot of time thinking about the specifics of any kind of media content as differentiated from the types of friends-and-family content that really was its bread and butter the whole way through.

I think it was only in the 2016-2017 timeframe that the issues [arose] of what types of content are different, what types of content do we need to think more seriously about and potentially bring in partners from the publishing industry to help us think through those issues. That was really a later process more than it was something that had been built into Facebook from the beginning.

Are you basically saying that there wasn’t much thinking about what the responsibility of news distribution was?

I don't think Facebook realized how important it was to the news industry until the news industry told it how much Facebook was driving the industry's traffic, driving its revenue model, and in some ways driving how people perceived the news industry and news itself, which really happened in the post-2016 election time period.

That seems late. To be honest, it just seems like that’s kind of surprising that there hadn’t been much thought given to it until then. Were you surprised by that at all?

I was surprised by a lot of things when I joined Facebook. As someone who grew up in the media world, I expected there to be more of a sense of how people interact with media and how important it can be to certain people's information diet. I don't think Facebook really realized how much it was part of an ecosystem that was important to many of us outside the Facebook world. When I joined-

Facebook disrupted the distribution of news and information, and basically it seems like you’re saying that they didn’t really think about what that meant, what the responsibility for that was, even for the health or information of its community, of our democracy, of what people know and think and believe.

I think many of the people who run Facebook are young enough that their worldview of news really came from the Fox Newses and the MSNBCs of this world, where news was not something with a capital N that had a very serious and singular point of view, but rather it was a partisan thing that could be used to whatever ends you want. So I don't think people who grew up in that environment really have the same understanding as someone who's a little older, like myself, about how news has historically been perceived.

I don't attribute where Facebook was with news when I joined to anything specific to any tech divide or to any naiveté, really. I think that the media environment has changed a lot, and if you're in your mid-30s right now, you really grew up with news as an argument, with news as a cross-line discussion between two people who disagree, and the more they disagree, the better the news and the better the media.

I think a world where all of a sudden, all news is just someone’s point of view and where the more we argue and the more we fight about that news the more successful we are, the more money we make, it becomes very easy to not understand how news can have any more importance than any other piece of content.

When you come in there and you’re charged with thinking through what news is, what were the sorts of conversations you’d have with people that were of the mindset that news wasn’t necessarily something sacred and that hadn’t really given much thought to the idea that Facebook may be responsible for informing large swaths of the globe?

A lot of what I did at Facebook, as one of the older people in most of the rooms I was in, was try to hearken back to the news and media environment of the '60s, '70s, '80s that I grew up in and really try to bring forth that understanding that historically, news has had a special role that in some ways offset some of the more prurient and other sides of media. You can be a media company that creates a lot of quote, unquote "wasteland content," in someone's words, but if you also had a news division and you also spent a lot of money to invest in getting the story and getting the story right, that bought you credibility.

I think one of the things that we tried to do at Facebook, especially post-election, was see news as a way to re-establish the fact that this mechanism that had been built called "News Feed" could also have a role in trying to really up-level the dialogue, to try to differentiate between credible content and not credible content. If part of the fundamental function of Facebook is to inform the world, that informing can be telling you what your friends are doing; it can be what your family is doing; but it can also tell you what's happening in Syria right now. It can also tell you how to make better decisions about your elected officials. I think a lot of the message that I tried to bring to Facebook, and certainly that's been resonating all over the place today, is that there is an important role that a company like Facebook can play in differentiating between the serious news and the less serious news, the factual stuff and the opinion stuff, and trying as best as possible to make those distinctions.

What it sounds like, more than a West Coast-East Coast divide, is that there is almost a generational divide here in terms of an appreciation for what journalism is and what news actually is.

I think there's very much a generational divide. I think the media world that you grow up in today, whether you're a millennial or whether you're in this next generation where everybody has iPads, is so different from what any of us who are slightly older grew up in. It's really hard to understand that difference between, you know, in my world, what used to come on the doorstep every day and what is friends and family just talking about things.

I think people today, especially the types of people who are senior executives at Facebook, grew up in an environment where content is everywhere and where everybody is a content creator. Those are good things. Those are not things that we should shy away from, but they bring a whole new set of issues that I think we’re just starting to grapple with as a people.

Trending Topics

So let's go through some of the things that led to wanting to think about news inside Facebook, right? In terms of-there was the Gizmodo event, right, in the summer of 2016, which was basically about Trending Topics. So Trending Topics. This was-

I think it was March of ’16, I think.

OK. It may have been May. … Are you able to bring me through what happened there?

Sure. Depending on what’s the interesting part.

Well, I think the interesting part is leading to the fact that here you’d had some human curation, right, of Trending topics and then accusations of bias. Then you have basically taking humans out of the equation and putting the algorithm in charge, yeah.

I think what happened in early '16 with the Gizmodo article and the Trending Topics was in some ways a difference of understanding of what the roles of curators are. What Facebook internally thought curation was in that case was deciding what's a story and not a story. When you have algorithms trying to figure out what is news, what is trending, you have false positives. You have things that are not really trending stories but that hit a lot of the same things the algorithms are looking for and so make them trending stories.

So Facebook at that point saw its role as a curator as really saying, "This is something that is important and trending and news," all of the above, whereas I think the way the Gizmodo article interpreted a lot of what Facebook was doing was deciding what kinds of news stories deserved to hit Trending, which really was not the intent and was not in any way the way the system was designed.

When we talk about ways that some words can have two meanings: to a technologist, "curation" just means, is this a news story that is trending or that should be trending? Whereas when I talk to my East Coast friends, for instance, "curation" means: Is this important? Is this relevant? Is this something that fits within a certain editorial point of view? That was not in any way the intent, but it was how a lot of the decisions that were being made were perceived from the outside.

At the time Facebook realized it was easier to just move much more quickly toward algorithmic curation again, for lack of a better way to put it, with no humans in the middle of it.

Was there an understanding inside of Facebook that an algorithm making decisions is akin to making editorial decisions?

It's a very hard question to answer, because every one of those words, like "curation" and "editorial," has different meanings to people. The idea that you're designing an algorithm that gives you more of what you want can, to an engineer, look like an engagement decision. If you read this and if you commented on it, and that's what we're optimizing for, we'll give you more of the things that you read and the things that you engage with.

In some ways, that's what a newspaper editor does, too. They don't want to write stories about things that no one wants to read until it's so important that you have to write that story, and there's certainly a history of editors forcing the story by just hitting it over and over again. I think answering questions like how editorial decisions are made by algorithms is so nuanced and difficult that it's hard to unpack without really getting semantic about what those words mean.

I don’t think anybody at Facebook designing the algorithms, certainly in the earlier days of Facebook, would ever have used the words “editorial decision.” It was an engagement metric. It was optimizing for that metric and seeing how to predict what types of things people would want. I think later on there was more of an understanding that that might be an editorial decision, although a reticence to maybe use those words.

Drawbacks Of The Engagement Algorithm

In that realization, is there a dawning understanding of the potential consequences of constantly optimizing for engagement, what that might do to the public sphere?

I think certainly after the election of 2016, as more of the fake news and propaganda came to light-which were essentially hacks performed on the engagement algorithm of Facebook-there was more of an understanding that only optimizing for engagement, or predominantly optimizing for engagement, had its downsides. That doesn't mean there are easy answers as to what you do next, because starting to think about things like quality or informedness, and how you can move away from engagement toward other metrics that might have more seriousness and heft and gravitas to them, only raises a million more questions as to how you look at that, how you calculate that, and how you design the algorithm for it.

How do you do that?

There’s no easy answer. Certainly in my time at Facebook, we attacked the problem in a couple of different ways. We looked at simply asking people who they trusted and what types of news properties or news publishers felt more trustworthy. You could look at the data and try to understand things like breadth: what types of news properties are read by people on both sides of the spectrum, and so may be more balanced. You can look at just hiring people. We didn’t do that in my time. It’s possible that’s going on right now, I don’t know, but that doesn’t scale well, you know.

You start getting into language issues and culture issues and all sorts of issues, so there’s no answer to that question. There’s just a set of processes that you have to go through to try to figure out how to get close to it. The thing about Facebook is that once it understands the problem, the outcome, or the goal, it is really good at iterating like crazy to get to that goal. At the time when I was there, and I left in late ’17, we had spent about a year or so by that point trying to understand questions of, like, what is the most informative piece of content, or what is the most trustworthy or credible piece of content? I know they are continuing to do a lot of that work and probably have invested 10 times as much as when I was there. I believe Facebook will figure this out. This is what they’re good at. I think if anything, the question was just, “Should we have looked at that sooner?,” and clearly with the benefit of hindsight, of course we should have. But there’s no easy answer.

Fake News

Just going back to the timeline a little bit. What were some of the seminal moments of realizing that fake news was a problem? When did fake news come on your radar screen?

After the election, it became increasingly clear to us, anyway, sitting in Silicon Valley, that some of the content that had come through our system during the election was not only fake but was designed specifically to propagandize or specifically to create misinformation. We’ve had satire for a long time; we’ve had jokes.

We’ve certainly had plenty of instances over the years on Facebook or other social platforms of content that didn’t necessarily represent itself well being very successful. And the stakes historically have been, OK, so what? I think it was the combination of all of the content coming through Facebook discussing politics, the charged environment that we lived in certainly during the 2016 election, and then, as more came out that there might have been specific intentional misinformation and propaganda tied to what was potentially going on with the Russia stuff, that it became increasingly clear to us that there was a bigger issue here. It’s really easy sitting here in 2018 to have a very different point of view. Even the day after the election, it was not obvious a lot of those aspects were happening. Like it really-this was an onion that was constantly being peeled apart to us, and we were trying to get in front of it as fast as we could.

I want to get back to Election Day. But one of the things and just some challenges here, it had been known for a long time, one, that kind of misinformation and disinformation could spread on Facebook. People were talking about anti-vaccination campaigns, and people were talking about propaganda abroad spreading on social media platforms like Facebook, in part taking advantage of the algorithms. It had also been known for a while that fake news could spread. Why weren’t these things [sounding] alarm bells inside of Facebook before the election?

I think it’s a very dangerous question to ask how Facebook could do a better job of preventing misinformation. Let’s take anti-vax, which you just brought up. Should anti-vax as a movement or as a sort of idea not exist? Is it Facebook’s role to shut it down? Is it Facebook’s role to start looking at things that are being discussed and fact-check sentences that might not have a scientific basis behind them?

I’m not sure that’s an easy or important role for Facebook to take as central to it. Or maybe let me say it a different way, which is I think if in 2015 Facebook had said, “We’re now not going to let fake news, or things that are not commonly accepted to be current thinking, through our platform,” there would have been a lot of pushback from everybody on both sides. One of the central tenets of free speech is that we fight for each other’s ability to say things we completely disagree with. It’s unclear how Facebook can draw a simple line, or even a complex line, between things that are fake, things that are not currently accepted, and things that are agreed conventional wisdom.

But what if Facebook’s algorithm is amplifying things that are fake or tendentious more than things that are true or factual?

What if. I mean-

Meaning that it’s one thing to say no one-I don’t know if anyone is advocating for the fact that you can’t say that vaccines are good, bad, whatever, but I’m just-the question seems to be more one of, is the platform designed to optimize engagement, and is fake content more engaging sometimes? Are hoaxes engaging? Is polarizing content more engaging? And what does the whole system actually engender?

I spent most of my career in media. I think it is safe to say that if there’s one lesson that’s core to media, it’s that the more contentious, the more point-of-view-centric, the more argumentative, the better the media does. The reality is we live in a world where cable news has turned into what it is because it drives ratings. Fox News is the largest generator of profit because it’s really good at what it does, whether you believe it or don’t believe it, whether you agree with it or disagree with it. It’s an important example of how media, across any kind of platform, oftentimes tends to go toward the extreme voices. Facebook is no different.

And yes, there are aspects of Facebook that absolutely magnify things that are controversial. I’m not sure at the core of that there’s anything wrong with that. We should be engaging in stuff that we’re interested in, and we at Facebook as a platform should-I’m trying to think of how to say this. I think it’s very difficult to separate what the media does well, which is provoke, which is argue, which is disagree-

Or inform.

Or inform, and to inform in a way that lets you as a platform build algorithms to decide when that’s gone too far, or when something that is provoking is actually just lying or propagandizing. There are no easy lines for this. Historically, we’ve just given that role to a human. They’re called an editor, or they’re called a publisher, and obviously at the scale Facebook is operating on, it’s hard to put humans in front of everything, although it is now trying to do that wherever possible.

But it’s a very fine line between allowing someone to say something that we all disagree with but that deserves to be said and going to that next step and saying, “No, this actually doesn’t deserve to be said.” I think had Facebook spent a lot more time earlier on trying to prevent these unpopular things from being discussed, it would have been just as much of a problem for everybody on the opposite side, which is saying that this vibrant new way for everybody to have a voice, for everybody to be able to connect to any piece of content, is being choked by the fact that some people in Menlo Park are deciding what’s good and what’s bad or what’s something you should see and what you shouldn’t see.

Is Facebook Too Powerful?

…Doesn’t that lead them to the bigger question as to whether there is something fundamentally problematic with one platform having this much power over the distribution of news and information globally?

I don’t worry so much about one platform having more or less power, because the platforms ultimately at their core are open. When I grew up in the ’60s and ’70s, there were one or two large newspapers in every market. There were three or four TV stations. There was no national cable news. There were very few voices, and there were very few ways, if you had an alternative voice or alternative point of view, to get heard, and certainly to interact with people on the global scale that companies like Facebook have allowed. I think with any new technology like Facebook, you have to ask about the good and the bad. I personally believe the good far outweighs the bad. That doesn’t mean we can’t work to mitigate or ameliorate some of the bad that we saw.

But I still come from a place where the technology that enables everybody to read any newspaper in the world, to talk to any other person in the world, and to talk to a large number of people at once is, at its core, a better tool for democracy and a better tool for communication than the world that I grew up in 30, 40 years ago.

Facebook And The Distribution Of News

Were you involved in the decision to get rid of the human curators after that? You were?

I was not.

Oh, you weren’t, OK.

In some ways, that was what led to them asking me to take on news. That was more of a systems-ops kind of role-like what we call a Community Ops role-that was really about prevention. The original Trending feature was more about preventing bad things that shouldn’t have trended from showing up, and by that I mean non-news stories. I think with the Gizmodo article and with this idea that we had to think more seriously about our role, [we] came to this idea that let’s actually build a news group whose intent is to think about these issues. That was when I first started to get involved.

Also, your meeting with Zuck was when you were first charged with this?

Yeah.

OK.

Well, I mean, I had a couple of meetings, obviously, but I-

Are you able to kind of bring us in the room to some degree to whatever extent you can, Andrew.

Not so much that I can’t bring you into the room as it’s not really interesting. It’s not like a Ben Bradlee-fighting-with-whoever kind of meeting. It’s a very perfunctory meeting. You know, we’re asking for headcount; we’re asking for certain types of product development approvals that are not at a level anybody in this audience is going to really care about.

But what about that thinking-what did you glean to be Mark Zuckerberg’s thinking when you were put in charge of thinking through news?

I came as a startup guy to Facebook. I was working at a company that, before I joined Facebook, had three people in it. When I joined Facebook, clearly the scale was much larger. I think the thing that was most interesting to me from my presentation to Mark about the news group was how quickly I got 60 heads, 60 more people, and that was just the initial mid-cycle ask. This was a meeting that happened not during the normal planning process, which is when you would normally ask for headcount growth. So we had a decent plan that said, “Here are some places we should start to invest, and we need people,” and we were given 60 people immediately.

Once we started or once the company started to think seriously about news, there was a lot of opening of the resources, which to me was bigger than any headcount ask I’d ever had, bigger than most of the companies that I ever worked for. I think the thing that was most interesting to me about that meeting was simply that when Facebook decides to do something, it can put a lot of resources to bear very quickly.

First of all, what was the plan that you presented, and were there any kind of substantive questions from Mark or others about what your plans were?

The most important part of any product at Facebook is what you’re optimizing for. We’re building products that have to work at global scale, that have to work in multiple languages across most of the geographies of this world. Saying something like, “We should invest in news,” isn’t answerable on its own. We have to optimize for a certain thing when we invest in news so we know what success looks like. So the most interesting discussion we had, though I think we didn’t have an answer for it, was: if you’re going to build a news product, and if part of the role of your news product is to inform your audience, which is what we suggested we should be doing, how do you define informing your audience? What does informing mean? And informing your audience in the news context can mean talking to them about Syria or telling them about their local politicians.

Or it could potentially mean not disinforming them.

Or not disinforming them, but informing your audience in a broad Facebook sense can also mean telling them that your sister landed successfully in New York City on that trip she’s on or that your friend just had a baby or that your other friend just got a new dog. Having to build a site or a service that tried to ask our users if we were informing them but to really narrowly focus that informing on things like world news or local news was something that-we had an interesting discussion with Mark but no answers. I’m not even sure to this day there’s a good answer to that, especially on the local news side. Telling me there’s a traffic jam that’s going to ruin my commute is news. So is telling me what bombing just happened in Syria. How do you differentiate the two, and how do you decide which gets more important? It’s really difficult. If you’re leaving work, that accident on your way home is much more important than whether there’s a bombing going on somewhere.

Disinformation And Misinformation

Were you also dealing with the disinformation or the misinformation problem inside of these discussions, or inside the plan?

In the original plan that we first brought to Mark to get approval to start building a news group, we did not spend a lot of the time on misinformation. We tend to build products-or certainly during my time at Facebook, we tended to build products-optimistically, assuming good use cases and assuming that people were not going to be gaming the system. There was a Community Ops group in other parts of Facebook that had to worry about some of those negative issues, so a lot of my focus in the initial news plan was how we could do a better job of informing. In particular, we spent a lot of time talking about local news and the role that Facebook, somewhat uniquely, could play in the local environment. We did not spend any time in my initial plan talking about things like misinformation.

Was that actually a term that was used inside Facebook: thinking things through optimistically or coming up with optimistic plans?

No, I don’t think, certainly at the time, that was something that I ever heard. We just built products based on trying to build out some interesting use case that we had found-in my case, this idea of getting people more informed through news. My sense of Facebook, and I’ve been gone for five months now, is that Facebook today is spending a lot more time thinking about some of the negative use cases and ways that systems can be gamed as part of the initial process, a lot more, certainly in my area, than I did.

What was it like when news reports started to come out about fake news? I’m thinking of Craig Silverman and BuzzFeed, about Macedonian teenagers and a troll farm there spreading fake news, and the proliferation of it. How was it that journalists and researchers were seeing these phenomena on Facebook, but Facebook itself seemed not to have seen them?

I think there were a couple of different ways that the system was being gamed in that 2016 period before the election. The Macedonian teens building their fake news sites was really about monetization, at least where I saw it. It was really about getting people to share fake stories that would ultimately lead to what’s called an ad farm, which is a page with a lot of ads on it that people will then click, making money. …I would say our first real inkling of misinformation had a lot more to do with financial chicanery. Ad farms and the Macedonian teen example that went around a lot were really about using fake news to game the distribution of Facebook, ultimately to drive people to an ad farm to try to make money. We certainly in the news group were worried about that, but in the same way that we were worried about those kinds of financial goings-on all throughout Facebook’s history. That’s a problem. There have always been these kinds of things; you solve it. It was really later on in the process, I think well into 2017, before some of the mal-intent kinds of propaganda behavior, specifically as it has to do with Russia, came out. I think that was a much different process, certainly for me personally, to think about versus the idea of some people just making money they didn’t deserve and trying to prevent that.

Was there an awareness about the proliferation of hyperpartisan news sites on Facebook and the really divisive content that was going viral on the platform, and was there any concern about the fact that this whole kind of ecosystem had developed on Facebook of hyperpartisanship?

There was certainly an awareness that hyperpartisan content was increasingly successful on the platform, but I’m not sure we saw that as doing anything other than mirroring the media environment that we lived in. You could turn on cable news at the same time and see content just as hyperpartisan. So I don’t think there was an appreciation necessarily that there was anything special about the type of content that was going on Facebook versus the rest of the media world. And quite frankly, Facebook and technology platforms are great at the very small-market content or the very targeted content, whether it’s hyperpartisan in the case of politics or whether it’s about a very obscure craft project that doesn’t ever get to the scale where you could write a magazine or a book about it. That’s what social media, and that’s what the internet and blogging, have always done really well.

But someone who works at Fox News may have occasionally a night where he or she wonders, “Am I contributing to a larger societal problem of hyperpartisanship?”

If they only have that, then we have a bigger problem.

Right. But in the same vein, were you asking or was anyone thinking about, ”Hey, is Facebook actually amplifying an atmosphere of hyperpartisanship? That the algorithm amplifies things that are incendiary or emotional or extreme?” And during the election cycle, not after, but during the election cycle, was anyone kind of saying, “Hey, what, you know – are we contributing to a fracturing of society as opposed to informing society or doing something else?”

I think it’s important to make a distinction that’s harder to understand from the outside between news and News Feed. I ran news and did a lot of the news product stuff. News Feed makes more of the decisions, or in some ways all of the decisions, about what types of content work and don’t work-what are we optimizing for, whether engagement or polarization. There [were] definitely people inside News Feed who were thinking about questions of polarization well before the 2016 elections, because that’s always been something to think about-not polarization in the political sense, just polarization in the argumentative sense. If people are using our platform to just yell at each other, it’s not a good user experience, and you want to fix that.

I can’t speak specifically to how much of that was going on at Facebook during the election cycle, because again, it was a News Feed thing, and I was on news. I would say certainly post-election, we thought about it a lot more, not only in the polarization sense in general but in the specific news sense. So we started to look at things like what types of news properties spoke across the aisle, so to speak-that seemed to have what we were thinking of as common ground among both people on the left and people on the right. But I don’t think that was as explicit an area that I focused on really until after the election.

Facebook And Filter Bubbles

What about the term “filter bubbles”? Were people kind of concerned about the exacerbation of filter bubbles-that by giving people what it is that they want, you were in some way creating echo chambers among your users?

I think everybody in media both thinks about filter bubbles in some way or another and doesn’t have that much ability to control it. The fact is your cable system could turn the dial to a different channel. If you’re a Fox viewer, it could wake you up every morning with MSNBC, and all you would do is turn it back to Fox. And so we certainly had similar kinds of analysis that said we can show somebody on the left more content that would be perceived as conservative. They would just read right through it and not click on it, not engage with it, not do any of the things that are successful for optimizing the algorithm. I think there was a basic understanding that you can move people a little bit here and there, but that truly trying to break someone out of their bubble-the problem wasn’t the algorithm; the problem wasn’t Facebook. The problem was people don’t want to read content they don’t want to read.

But when you have a product that is specifically designed to keep you engaged, are you not actually kind of exploiting that tendency of human beings to just want what they want and stay in their tribe or stay in their filter bubble?

But don’t all media properties have that? When The New York Times hired Bret Stephens, who’s a conservative reporter or conservative writer, they lost a lot of subscribers, at least anecdotally, from the things that they saw in the letters to the editor that they themselves published. I think it’s easy to say, could Facebook have done more or less to exacerbate filter bubbles? But I think everybody in the media world, everybody who builds a product, has to ultimately ask the question of at what point do you succeed by pleasing your users, and at what point do you succeed by punishing or giving them something they didn’t necessarily want? That’s maybe exacerbated in some ways by how Facebook works, but it’s really a universal requirement of building a product.

What was the mood in the lead-up to the election inside from your perspective, either your mood or the kind of mood inside the company as it pertains to news and information and how things were going?

The 2016 election cycle was good for the news industry, across the board both anecdotally and in all the data that we saw. News was being read more; news was being discussed more; newspapers were getting more subscribers; people were watching more cable TV. From a media perspective, without any idea of informing or not informing, just from simply a media consumption perspective, the 2016 election was great for media. In fact, the post-2016 election was even better for media, because you had a whole set of people who were very unhappy and a whole set of people who were thrilled, and they both consumed more media.

It’s safe to say that during the 2016 run-up to the election, we saw a lot of our numbers growing like crazy, as did the rest of the media and the news world in particular. And so as a product designer, when you see your products being used more, you’re happy.

You hit your goals sooner, and everybody is certainly happy. Speaking personally as someone who is more liberal than conservative, we were happy because we thought that our guy or our woman in that case was going to win. And so it was-it was not something that I think most of us would have understood to be anything other than another election with a really great story to be told, and a lot of people wanted to hear that story, and a time when news was really coming into its own on Facebook.

Disinformation And Misinformation

The Wired article has this quote from someone unnamed saying that there was some sort of realization that there was a disease on the platform, right; that fake news, misinformation, disinformation-not the Russians at that point, I take it. But was there a sense that there-was there a sense before the election that the platform was in some way diseased?

Not anything I saw. I think in the specific pre-election period, the thing that was most being talked about was the Macedonian teens, which was a monetization problem-a hack on our monetization. I think Facebook had seen that happen before in many different ways, with many different types of actors and many different types of outcomes, so that was seen as a normal thing we had to fight. On any kind of platform, it’s an arms race. With e-mail in the ’90s, it was spam. You figure out how to block spam, and the spammers get better, and it goes back and forth-in that case, for 20 years.

Certainly when I was at Facebook during that period thinking about things like Macedonian teens creating fake news stories, that was seen in that same light. You know, people read The Onion on Facebook and think it’s true. That’s in some ways just the same kind of issue. Obviously in other ways, the intention is very different. It’s specifically to be satirical, not to be fake. But these kinds of things happen every day. This is just the natural output of being a large platform.

So “intentional disinformation campaigns” was not part of the vocabulary before the election?

Not for me. Facebook already was a big company at that point-you know, 15,000 people-and there is a big Community Ops group that absolutely was thinking about that to some degree. I wasn’t part of those discussions, so I wouldn’t in any way want to say that there’s nobody thinking about that. Those of us building news product, which was my group, were not spending a lot of time with that aspect of it.

Fake News

…Mark Zuckerberg is asked right after the election whether he thinks that fake news played a role on Facebook during the election, and he basically says that’s kind of crazy to think that it had any appreciable impact. What did you think of that at the time?

I think a lot of us internally felt that that was not the right word choice, and I think in particular it’s easy to look at cause and effect in multiple ways. I don’t think any of us, Mark included, appreciated how much of an effect we might have had, and I don’t even know today, two years later, or almost two years later, that we really understand how much of a true effect we had versus [then-FBI Director James] Comey releasing the investigation about the emails or any of the other sundry things that happened around that period.

There was absolutely naivety, at least in my mind, in what Mark said, and I think Mark would say the same thing right now. But I think more importantly, we all didn’t have the information to be saying things like that at the time. Again, I won’t speak for Mark, but my guess is that Mark now realizes that there was a lot more to this story than he or any of us could have imagined at that point.

Did that response speak to something more endemic at the company-a lack of critical thinking about its role in society, about the problems on the platform, a blindness to potential issues?…

I think it was very easy for all of us sitting in Menlo Park to not necessarily understand how valuable Facebook had become to a whole segment of people who were looking for information and not getting it from other sources that they would normally have gotten from and who instead turned to Facebook…

…There was an article that comes out soon after the election that claims that some of the most viral articles on Facebook in the months preceding the election were fake and got a lot more play than the top articles from legitimate news organizations. What was the response inside the company to that revelation?

I don’t think it’s easy from the outside to really understand what’s going on on Facebook. Typically when articles like that BuzzFeed article come out, we discount them. They’re interesting as an anecdote; they’re interesting as a sort of signal as to how the press is perceiving us and how people are talking about Facebook. But the data itself is almost always wrong and is almost always in service of a larger message rather than an actual analysis.

When it came to that specific BuzzFeed piece, at least in the meetings that I was in, it was more about trying to understand how Facebook’s role was being perceived, and how this idea of fake news, which at the time was still relatively new to us, was becoming more and more of a story, than about the data that he used to create that story, which we discounted. And I still, when I was there, did not say anything that necessarily agreed with what he said. I think you also always have to be careful about looking at the wrong metric in terms of how people consume content on Facebook. I think Facebook made that mistake, too, when we talked about a small number of ads or a small number of stories that maybe only get a certain amount of distribution but then go viral. At what point they are actually being read, at what point they are actually being engaged with, and at what point they actually become problematic is not always obvious.

So the numbers that a BuzzFeed would look at and the things they can count easily like likes and reshares aren’t always indicative of the distribution or the true audience for a post.

So you think that that article was wrong.

I don’t think the article’s ultimate thesis, certainly with the benefit of hindsight, was wrong, which is to say there was a lot of fake news being distributed. But trying to actually say this percent or this portion of people’s daily viewing was fake-I think that was wrong.

That article tried to expose the weird incentives on Facebook-the virality of fake news over the play of real news-and that it is an ecosystem or a news environment that, if gamed properly, will spread fake news. So what was it like hearing that inside of Facebook and kind of grappling with it at that moment?

It’s no secret that Facebook is designed, and the News Feed in particular is designed, to distribute content that goes viral. In fact, one of the ironies of BuzzFeed writing that story at the time was that BuzzFeed is probably the largest successful media company to grow from understanding that virality. …One of BuzzFeed’s initial insights was that the world was shifting from optimizing for Google to optimizing for Facebook; that it was not about being found in a universe of a large search engine but actually getting your own users and your own readers to redistribute your content, which is what virality is. So one of the ironies, I think, that at least I appreciated at the time was that Facebook being accused of distributing viral content that was fake was not a secret to anybody, least of all the BuzzFeed people, who built the company to do that.

Fact Checking On Facebook

…What goes into the thinking of creating fact checking and introducing fact checking on Facebook? Bring me through that story.

When it became clear that there was fake news being distributed on our site with very explicit intention, we realized that somebody had to start making decisions as to what is fake and what is not. We didn’t really know what we were going to do with fake news. You know, that’s a separate decision. Do you stop it? Do you stop it a little bit? Do you tag it? But the core decision was that fake news, or fake stories, was a thing we had to start grappling with immediately-so OK, what’s fake? Having come from a journalism background, having been at Wired magazine when it built its first fact-checking group, I was very familiar with fact checking as an entity.

Luckily for us, a year before this election, a group had gotten together to create what’s now called the International Fact-Checking Network, which was really a set of standards and a set of players who were going to sign up for those standards to say: we have a sort of official process for deciding whether something’s true or false, or some of the degrees in between. So when it came time for Facebook to say that this is an important aspect of the content we distribute-that we need to start melding in what people accept as true or accept as fake-it was better to bring outsiders in, both because we just needed the data and they were good at giving us that data, and also because we were starting to realize that we couldn’t just do it ourselves. We couldn’t just build algorithms and collect data on our own; we needed to start bringing in and working with industry.

…In our process of journalism, we go line by line in our copy and figure out is what we’re saying true line by line. How in the world can fact checkers go through articles and news stories or whatever is being shared and actually do that in any substantive way?

First of all, it’s great that you have a good fact-checking process that goes line by line. I would say that most media sites are not there anymore. I think that’s a vestige in some ways; I wish there was more of it, quite honestly. I think there’s a separate challenge, which is media isn’t consumed en masse anymore. You don’t read an entire story; you sometimes read just a headline or a quote that gets pulled out, and it’s distributed separately. In terms of fact checking on Facebook, you sort of start with the headline, because that is in many ways the most important thing. You actually have to look at the pictures as well, because one of the things that we found is that a lot of people don’t understand how things like pool photography work. If they see a fake story with a real picture of Trump, let’s say, sitting in the White House, a lot of people see that story and perceive there to have been a photographer sent to the White House to take that picture, and that picture itself lends the article a credibility that wasn’t there, because the picture was just copied from some other professional site.

So when we approached fact checking on Facebook, we weren’t trying to imitate or mimic the standard editorial process of saying let’s go through line by line and figure out what’s a fact and what’s not. We sort of started with how news gets consumed on Facebook, which is oftentimes the headline only and the picture only in News Feed, and say even if the rest of the story is true, if the headline is false, that’s a false story, and from there deciding how deep we could go into what is truth and what is false. It’s not easy. I don’t think it’s easy for media. It’s certainly not easy for a platform with the scale of Facebook.

Then it’s not easy, I take it, to train a machine to do it.

No.

Going back for one second, a headline during the campaign was “Pope endorses Trump,” which was not true, but it went viral on Facebook. Was it known within Facebook that that had gone viral?

I’m sure it was. I wasn’t in the News Feed group, where I would have been looking at stories that went viral. Anecdotally, I knew that was a story that had gotten out. I didn’t necessarily know how viral it had gotten, and I certainly didn’t believe that anybody believed it.

But would that have been a red flag inside the company, that something patently false was being propagated to millions of people on the platform?

I think if you ask the question that way, it would have been.

If the question is, “Should we like the fact that our platform is being used to distribute something that is patently false?,” everybody would have said no, of course not. But I think when you ask the next question, which is the harder and the more important question, which is, “So what do you do about it?,” you then very quickly get into issues of not only free speech, but to what degree it is anybody’s responsibility as a technology platform or as a distributor to start to decide when you’ve gone over the line between something that is satire or clearly false and something that may or may not be perceived by everybody to be clearly false and can potentially do damage. It’s a very difficult line to walk, because we’ve never really lived in a world where that kind of fake story can then change so many ways people think. And I’m not even sure today-and I don’t want to sound too naïve or Pollyanna-but I’m not even sure today that it’s obvious that that story in particular changed anybody’s vote.

Media organizations like Frontline are held responsible because of FCC rules. We could be sued for slander or defamation. We are held to account legally and by the government because we have a license, through PBS, to operate, and much of that is because we’re defined as a media company, as the media. I’d like to know why Facebook has avoided describing itself as a media company, and why the same rules and regulations that apply to a broadcast entity shouldn’t apply to Facebook. Why should there be such an exemption for a platform like Facebook?

Frontline doesn’t have to practice the journalism it practices because it’s defined as a media company. It has to practice it because it has an FCC license, which comes up for review, and the FCC, which grants spectrum that is a sort of public asset, has decided that as part of having that public asset, you have a certain responsibility. Facebook, like every blogger, like every tweeter, like pretty much most of the rest of media today, which does not need to get an FCC license, doesn’t have that same set of restrictions. I would argue that the idea of a media company having a certain set of rules distinct from, let’s say, a technology company really isn’t the right way to think about it and isn’t really how the rules are set up. In fact, I think too much has been argued about whether Facebook is or isn’t a media company.

If Facebook had said it was a media company 10 years ago, I’m not sure we would be any further along with some of these discussions, because I think that what media is, and what the responsibility of media is, especially in a world where we have the First Amendment and such an innate desire to protect anybody’s speech, including unpopular speech, including incorrect speech, means that these distinctions are very difficult. So even if you talk about satire, one of the things that we learned after we brought fact checking into Facebook was that a lot of sites were starting to hide behind the idea of “Oh, I’m just satire.” We actually internally had this idea of obvious and unobvious satire. You’re allowed to be satirical, but it has to be obvious. You can’t bury somewhere on your website the fact that you’re satire.

You have to have an “About” page; you have to have something that says this is satirical. These are very hard lines to draw, because at a certain point, whether you like Facebook or don’t like Facebook, it’s unclear whether you want Facebook, or any company with the wide distribution platform it has built, to be making these decisions.

Facebook And The Distribution Of News

…What were those discussions like with the major media companies? In terms of what they wanted-did they want to return to the past to some degree? Describe it.

Certainly in jest. A lot of the large media companies would tell me they wished it was the old world, where they controlled the ability to talk, and they controlled the ability to have a voice, and they had a very good process for deciding truth and falsity in a way that actually allowed less fake news to make it through. Once we got past the joking, it became a very nuanced discussion about the fact that there are no easy answers; that on the spectrum between only allowing things that are 100 percent true on one end and allowing everything through on the other, there was a line in the middle that everybody could agree was probably the right place, but everybody’s line was slightly different, and the tactics to canonize that firm line in code were going to be at best complex and most probably impossible.

So this is impossible? Is this an intractable problem, asking Facebook to be responsible for the distribution of true content on its platform?

I think Facebook and all other companies who distribute information have a choice between being highly restrictive and being completely open, and the more restrictive you are, the narrower the audience you serve, the less ability for people with new and interesting emergent voices to get out there, the less ability for unpopular discussions to happen. Facebook was designed to live on one end of that spectrum: to allow as much freedom of discussion as possible, to allow as many unpopular, or what might be perceived as marginal, voices as possible to have a voice. And its role was really to connect everybody. Open and connected was in the DNA and the message and mission of the company since the very beginning. I think if you look at the other end of the spectrum, you see a magazine that says we’re conservative, and we only talk about conservative issues, and we have a voice that is very specifically one or a few people’s voice about a certain political point of view.

Where you draw that line in between is never simple, and a lot of the work I was doing toward the end of my time at Facebook was really trying to figure out not so much where to draw that line but how to bring more of the decision making out into the open. You could decide you wanted to agree with something because it comes from a certain perspective, or it has a certain editorial process, or it is or is not fact checked, and we tried as much as possible to inform our audience of the basic tools of journalism and the basic rules of journalism that would allow them to decide: maybe on one end this is satire, or this is fake, or this is not someone who has a normal editorial process, versus this is an established publication; it’s been around for 150 years and has all of the rigors of journalism.

And really to bring it to Frontline, someone should read Frontline and a site done by an individual person and say OK, one is a bunch of people who have been around for a long time with a brand, a reputation and a process, and one is just one person’s thought. There’s nothing wrong with both happening on our platform. We truly and very strongly believe that was critical, but we also realize that we could do a little bit more to differentiate those two.

It’s one thing to differentiate. It would be another thing to say Facebook is going to actually promote certain outlets that are established or trustworthy over others that are not. What about that? What about tweaking the knob so that legitimate news organizations get more reach than illegitimate ones?

Toward the end of my time at Facebook, and really after I left, I know that was an effort that they continued to push on. For the longest time, Facebook really didn’t want to make those decisions at all. Now, I know, again just from projects that I’ve worked on and people I’ve spoken with, there’s more of a thought that someone like Frontline or the BBC or The Economist, who might be more broadly accepted as having a good process, should have more voice, more share of voice-this has all been announced publicly-than someone who might be either uninformed or purposely trying to misinform.

…I guess what I’m asking, though, is was there an understanding inside of Facebook that that company had essentially contributed to the demise of the business model of various news organizations that actually created a lot of the content that was being shared on Facebook that Facebook was benefiting from financially and otherwise?

I’ve been in internet business and media since 1992. The news industry I think by most people’s accounts peaked around 2000. Facebook wasn’t started until 2004. I don’t think Facebook thinks it has any more or less of a role in what’s happened in the news industry than Google, or than Craigslist, which certainly took away a lot of the business model of news, or a whole host of other sites. Certainly as we look today, Facebook is one of the bigger companies; Google and Amazon and others are bigger. There’s no question that if you size things up today, you’d have to say Facebook is an important part of where attention is going away from things like news.

But I think it’s disingenuous to say Facebook specifically did that, or did anything to create the current environment of news, any more or less than the internet or web media or any of the other set of things that have happened over the last 20 years.

You really think so? Do you really think that? Facebook’s hosting of news content has been a seismic shift for the news industry.

The news industry was chasing traffic. The news industry did not have to let their content be shared on Facebook. In the same way, if you remember, in Spain they tried to stop Google News from distributing Spanish news content, and Google said, “OK, we’ll just shut down Google News in Spain,” and all of a sudden the industry realized that a lot of their traffic went away, and they said, “Oh, just kidding,” and reversed it. I’m not trying to, again, be naive. I think there’s absolutely an effect that Facebook had, the way Google and others have had. But seeing it either as zero-sum, or seeing it as a bunch of naive newspaper guys saying, “Hey, we’re just trying to inform the world, and all these tech guys are screwing our business,” is a very localized way of seeing the world.

The news industry and the media industry were among the first users of internet media. In 1994, when I was helping to start HotWired, we were in large discussions with almost every major newspaper, certainly Time Warner and others who were very active early. CNN and a lot of the broadcast news industry, they were all there. They were all at the table, and some of the first websites were from them. To what degree did they not get where it was going? To what degree did they not stay in front of things, or did they abdicate their own responsibility? You know, I think that’s a very interesting discussion to have. But when the dialogue starts to get to these tech guys are stealing time, stealing money, stealing business models, it really is-media’s never been that way.

It’s always been changing. The rotary press in the 1870s changed how newspapers were printed and distributed, and you had yellow journalism around that era. We’ve had waves of technology disrupting media, and media then using that technology to grow even bigger than it was before. And you know, the old saw is that old media is not replaced by new media; it just finds a different place to be. And I think that’s really what we’re going through right now.

It’s a cliché, but the “Move Fast and Break Things” ethos: is that what happened here with the news industry? That Facebook moved fast and broke the news industry to some degree, and is now kind of figuring out what the hell to do with that-is that apt?

I think the news industry broke itself long before Facebook broke it in that context. With the rise of cable news, news became just another commodity that could be changed and distributed; it lost a lot of the importance that built up through the ’50s, ’60s and ’70s, and all of the goodwill and the sense of the public good news was doing. So I think by the time Facebook, which was founded in 2004, came around, news was already broken by nobody other than news itself. Did Facebook exacerbate or grow the ability for bad news to be distributed? Absolutely, as did Twitter, as did Google, as did the internet.

But I think to say we’re here because a bunch of technologists naively went into this void trying to break things is really missing the whole ecosystem that, well before Facebook was created, had already done a lot to damage what news had built.

Is Facebook Too Powerful?

…Mark Zuckerberg has said he feels fundamentally uncomfortable sitting here in California in an office making content decisions for people around the world. What’s that about? Put it in terms of Mark, maybe.

Yeah. I think one of the things that-and I wouldn’t in any way want to speak for Mark, but one of the things that I’ve seen change, certainly in the public speaking that Mark has done, is more of an understanding of how this platform can be both used and abused, and how things that seem obvious to someone programming a certain specific problem, or someone trying to build a system that optimizes for a goal like engagement, can have a number of attendant side effects, either unexpected or certainly underappreciated. So I think when Mark says things like that, he’s really putting voice to the idea that even if you spend a lot of time thinking about the negative uses of your platform, even if you don’t build products as optimistically as I think Facebook is built, and you try to spend time thinking about all the things that can go wrong, there’s only so much that you can do.

There’s only so much you can do once 2-plus billion people have access to it and can, for good or for bad reasons, subvert it. I think one of the critical things being discussed right now around diversity in technology is that same idea: whether it’s a bunch of white men in Menlo Park or just a bunch of people in Menlo Park, at a certain point it doesn’t matter how diverse that audience is; it’s not the 2 billion people who use the product every day. That’s an inherent limitation that you can never get around, that you have to accept. And once you accept it, it requires a different type of product-development process, a different type of managing the risk of downside uses of the platform.

What it sounds like is that he’s uncomfortable with his power to some degree, and the power of drawing lines for this many people on his platform. But he built this thing, right? He built something and acquired this power, and now is starting to say, “Oh, my goodness, I’m responsible for it to some degree.” Isn’t it a little bit late for that? Isn’t he detecting how bizarre it is, how much power he has at this point?

Yeah, that’s a hard question for me to answer, because I don’t want to presuppose what Mark is thinking. I think if you look to the original mission of Facebook, make the world more open and connected, [it] has at its core a very strong and important thing that Mark truly believes to this day, I believe: that the more people talk, the more people can connect, and the more people can consume all points of view, the more people can meet and talk to all types of people from all different types of social strata, ethnic and racial groups, and that at its core that is a good thing. And again, I haven’t talked to him about this; I don’t know him that well, but I think Mark believes that still to this day. I think what’s changed is that we have a lot more knowledge now about some of the bad things that can also come from that: that when anybody can talk to anybody, people who have bad intentions, or who want to try to change people’s point of view through lies and propaganda, have the same platform that the people trying to use it with good intentions have; that there is a responsibility to give some of those technologies that facilitate all this communication, all this openness, some limits, some barriers. I think that’s something that probably Mark would also agree to. I think that next step, though, which is then, how do you execute against that, how do you build those products, who decides what types of messages go over the line between good, bad, indifferent, fact versus analysis, that’s where it gets really messy.

I do think Facebook has, as much as any other company, a good understanding of how to build those kinds of processes, how to work through all the issues and iterate. The thing about “Move Fast and Break Things” is that it doesn’t build in this idea of we’re always right; it builds in this idea that when we’re wrong we have to realize it and fix it quickly, because “break things” is not the goal. “Break things” is a tactic that helps you get to the goal, which is a better site that people want to use, and sometimes you shouldn’t overthink; you should just go and do it and then figure out how it breaks and then fix it.

I think the big change that’s happened-and this happened long before fake news and long before the 2016 election-is this idea that at a certain point, you get to a scale where you can’t always be breaking things, whether because technologically 2 billion people trust you every day to be around and you can’t just be down, or because there are bad uses of the platform like distributing fake news. I think long before the election, Mark had already understood the limits of “Move Fast and Break Things.” I don’t think he necessarily-or I don’t even want to speak for Mark. I don’t think any of us necessarily understood that the ethos of “Let’s see how it breaks” and “Fix it” can sometimes mean that really damaging things happen while you’re figuring out how it’s breaking and then trying to fix it.

What was your reaction when the news of the Russian interference started to come to light?

My reaction when the Russian disinformation campaigns came to light was probably more the reaction of a human-of an American than it was a product manager. I like to live in a world where I believe that on balance, people have good intentions, that I believe that people are not trying to specifically sow discord, specifically push on the differences between us, so I was obviously unhappy to find that that was happening, full stop. I was even more unhappy to then find out it was happening on a platform that I had something to do with.

Were you curious at the time how this was not detected, how Facebook had failed to stop it? What kind of questions did you have at the time about it?

Yeah. No, I was not curious, because I knew how it was done. The difference between somebody trying to have an alternative or provocative point of view and somebody trying to sow discord with a very specific goal of destabilizing a government is only in their intention. The means are the same; the tactics are the same. You know, one of the things that was brought up in one of the congressional hearings early on was that they paid in rubles. I think Facebook has 60-plus currencies. Many Russians use Facebook every day for perfectly reasonable things, and they pay in rubles, so the specific aspects of how that Russian disinformation was sown, or at least how I understand it was sown, were just normal uses of the platform.

The hard part is adding up all the pieces of that and understanding the intention, in some ways by looking at the outcome and looking at the approach, to try to figure out when someone moves from the normal ways where we should all be able to disagree and have a voice about those disagreements to the extreme ways that the platform was clearly misused during the election cycle. But there’s almost no difference. It’s the same thing as fake news and satire. Really, you could write the same story and on one hand make it a fake news story and on the other hand have it be satire, and I’m not sure that unless you really understand the intention of the writer, you can differentiate those two.

…I’m not trying to suggest Facebook should allow everything. It shouldn’t, and it doesn’t. In fact, Facebook prevents a lot less [sic] than Twitter and other platforms, so in some ways Facebook has been ahead of this issue compared to most of the rest of its platform brethren. But the point at which something becomes divisive speech versus unlawful propaganda is not easy to find until you understand the intent of the person creating it.

…Facebook did a very slow roll in terms of coming out with information and being completely forthcoming about what had happened on Facebook during the election. What was that like for you watching that and being there and seeing how Facebook was responding?

On one level, it’s always disappointing when the company that you work for, and in which you’ve invested so much of your time, is clearly either behind the story or being perceived to have made really egregious mistakes. As somebody who’s been in this industry for a long time and a student of media for so long, I realized very quickly that the important discussions were not happening in the press. They were happening in boardrooms; they were happening in product-development meetings; they were happening in meetings with partnerships that I was part of, certainly on the news side, and we needed to get some of the emotion out of it.

I got the emotion. I felt the emotion, which was anger, which was unhappiness, in my case to some degree because our candidate didn’t win, and in others because you don’t like to think you’re building products that were so misused. But I think fundamentally, the most important thing I was feeling at the time was thank God we’re now aware of it so we can go fix it, because I do feel huge respect for Facebook’s ability, once it understands a problem, to go solve that problem.

And while I certainly understood, and to this day understand, how nuanced and difficult these answers are, the iteration and the development that just has to happen over and over again, while you get little by little toward the nub of the problem, is the thing that Facebook is probably as good as anybody in the world at.

…In the news business, we’ve been asking for a long time for Facebook to take its role seriously and think about news seriously. Why did it take until 2016 and your group to start taking it seriously?

Facebook really from the beginning has been about friends and family. Other types of content like news were really seen, in the parlance of Facebook, as public content, just this other thing that we didn’t control as much of. News was not really seen as needing anything more special than that, I think, until the user experience of news got bad in the 2014 and ’15 era, which was when the company-and this was before my time-decided to build Instant Articles.

Instant Articles was really the first product that said: “Hey, publishers, here’s a special way to use our system. Create content that will be native to Facebook, and it will be faster, and we’ll have a cleaner user experience.” I think at the time Facebook saw it as a platform technology decision, which is here’s a faster way to distribute news content, and I think the news industry saw “We’re partners now.” I think that difference wasn’t appreciated internally as much as I certainly understood it, because I was on the outside when Instant Articles was created, so I realized that this was the industry saying: “We’re happy to get in bed with you, Facebook, but we expect more of you now. We expect a business model now; we expect more distribution.” And Facebook wasn’t really answering those questions, or even thinking about those questions, when it initially designed Instant Articles. A year later, which was when I was asked to start focusing on news specifically, I and a few others were saying, “No, we need to get more explicit about our support for the news industry. We can’t just say, ‘Hey, use our product and be happy.’” We had to start thinking about their business model more. We had to start thinking about the fact that if they were going to use our Instant Articles product, they had to have a certain level of monetization, or they would just stop using it.

By the time mid-2016 came around, enough people realized that this was something that we should support. And when I say “this,” I mean a relationship with the news industry that was more collaborative, that was more two-way, that enabled us to then invest much more heavily in the product.

Misinformation And The 2016 Election

…Last question: Again, coming back to the post-election moment, after the election there was this term “post-truth era,” “post-truth moment,” in part because of what had happened in 2016. What was part of the reckoning inside of Facebook for Mark Zuckerberg, for you, for others: what was Facebook’s contribution to creating a post-truth world?

I don’t believe we’re in a post-truth world. I think that’s a convenient way for certain people who are running the country right now to try to take some of the heat off their lies, their misinformation, their desire to have people believe the truth they want to believe. I still live in a world where truth is truth, where a fact is a fact. I think those of us at Facebook who had these discussions when I was there would all say fundamentally the same thing.

I think one of the hard lines to walk in making the world open and connected is the question: “What responsibility do you have to people who don’t necessarily have the skepticism, who don’t have media literacy, who assume that because something appears in print or on the internet, it’s 100 percent true?” I think the more interesting discussion to have right now is what we can do to lift everybody up to a place where truth is back to being a thing with a capital T, where we can all have a common set of facts and we can then argue about what we think we should do with those facts, which I think is a much better place for the platforms to be and a much better place for democracy to sit.