The Facebook Dilemma | Interview Of Monika Bickert: Facebook Head of Global Policy Management

Monika Bickert has served as Facebook’s head of global policy management since 2013. She originally joined Facebook in 2012 as lead security counsel, advising the company on data security.

This is the transcript of an interview with FRONTLINE’s James Jacoby conducted on September 6, 2018. It has been edited in parts for clarity and length.

Genocide In Myanmar

I’m curious what it’s like, honestly what it feels like, when the U.N. comes out with a report that says that Facebook played a significant role in a genocide. What’s that like for you running content policy at Facebook?

Well, this would be important to me even if I didn’t work at Facebook, given my background. My background is as a federal prosecutor, and I worked specifically in Asia and specifically on violent crimes against people in Asia, so something like that really hits home to me. But my job at Facebook is to make sure that we have a safe community, and so it’s especially important when I hear something like that. We have policies that are designed to keep our community safe, but it’s also important for us to enforce those policies, to find bad content quickly, to find violating content quickly and to remove it from the site, and we’ve been too slow to do that in some situations. That’s part of why we’re investing so much now in the technology to find more of this content and get it faster.

We are also investing in the people who speak the relevant languages and have the right local context to identify and remove this sort of content, and in relationships – and when I say relationships, it’s more than hiring people who come from countries around the world and speak these languages. We also have to have relationships with people who are in country, working for safety organizations or community organizations, who know exactly what’s happening on the ground and what we need to know to make sure we’re finding this content and removing it.

You used the word “slow,” but Facebook was warned as early as 2015 about the potential for a really dangerous situation in Myanmar. I’m just kind of curious what happened, what went wrong there, and why the response was so slow to what were rather clear warnings from people who actually met with Facebook at the time.

We met with civil society organizations in Myanmar well before 2015. This is an area where we’ve been focused.

And with my background, having lived in Asia for more than five years and worked on safety issues there, it was an area where I personally was also very committed to making sure we’re doing the right thing. But I think what we’ve learned over time is it’s important for us to build the right technical tools that can help us find some of this content and also work with organizations on the ground in a real-time fashion. It’s one thing to talk to an organization and have them say, “Here are the general risks.” It’s another thing to have that organization saying, “Hey, we’re seeing this content; you need to be aware of it,” and that’s the kind of relationship that as we’ve grown as a company we have built up over time.

I think we were too slow to build those relationships in Myanmar. We are in the process of building those relationships around the world on a much deeper level so that we can stay ahead of any kind of situation like that.

You’re a former prosecutor; you’re a lawyer. Should there be any liability or any legal accountability for a company like Facebook when something so disastrous goes wrong on your platform?

There’s all sorts of accountability: in terms of regulation, which we face around the world and take extremely seriously, and other types of laws. In terms of civil society organizations who hold us accountable every day, we maintain relationships with hundreds of these groups, and that means we’re in contact during the course of a week with a number of groups who will say, “Hey, you’re getting this wrong,” or, “Hey, here’s what we’re hearing from our constituents.”

But probably the group that holds us the most accountable are the people using the service. If it’s not a safe place for them to come and communicate, they are not going to use it. That’s something that we know, and it’s core to our business as well as being the right thing to do for us to make sure that people are safe.

Misinformation In The Philippines

Another Asian country I’ve got to ask you about is the Philippines. There’s a problem in the Philippines. We’ve heard from people on the ground there that Facebook has to some degree been weaponized by the [Rodrigo] Duterte regime; that there’s a misinformation problem, and it’s leading to real-world problems, real-world effects – people dying, people getting threatened. What are you doing to stem this problem in the Philippines?

Anytime we think there might be a connection between violence on the ground and online speech, the first thing we try to do is actually understand the landscape.

Partly we do that by hiring people who can work on our teams in those areas, but an essential part is actually partnering with people who work for local organizations, local community leaders, academics and journalists, to make sure that we are staying current on the issues and responding quickly. One challenge in the Philippines has been misinformation, and anytime you think about how to deal with misinformation, there’s a fundamental question: What should our role be? As we identify misinformation, should we be telling people what we’re finding? Should we be removing that content? Should we be downranking that content? We now have a team that is focused on how to deal with exactly that sort of situation.

Do you think you were too slow in the Philippines?

We can always do better in these areas. The most important thing for us is to build the relationships on the ground that allow us to understand these issues and then hire the people who can review this content and make the quick decisions and the accurate decisions we need to remove it. Technology plays a role just as in Myanmar. Technology is important, and there are things that we are doing, for instance, to more accurately identify fake accounts when they are created.

We see that a disproportionate amount of bad content is created by people who are trying to hide their identity. This is particularly true when it comes to something like misinformation, so building technical tools that can help us find those accounts and remove them quickly is fundamental to safety.

Real-World Effects Of Social Media

I guess, again, on the personal level – you lead a huge team of people, and we saw you in action today – does your staff understand the real gravity of the real-world effects of what happens in the virtual space? I mean, does it really dawn on people? Are people kind of upset about it on an emotional level?

Yes. And let me tell you, I’m not the only person on my team who has experience dealing with these issues in the real world. I was a federal criminal prosecutor for more than a decade. I worked on child sex trafficking and counterterrorism in Asia. When I joined the team, I didn’t come to Facebook just because I liked the company or the mission. I came because I care about these safety issues, and it was important for me to work on them. That’s true for the people I’ve hired onto my team. For instance, I have a woman who worked as a rape crisis counselor. I have a former teacher. I have Brian Fishman, who ran West Point’s counterterrorism research center [Combating Terrorism Center (CTC)] and has written a book about ISIS. I have a woman in Europe with a Ph.D in extremist organizations. The list goes on. But the reason that we hire these people is first of all, because they have a passion. They are coming to Facebook because they care about these issues, and this is a place where they can work on them. They are also important because they know the people who are around the world working on these issues with whom we have to maintain relationships. If there is an event that happens in the world with an extremist group, … the people on my team have those backgrounds to know who are the organizations, who are the people that we need to reach out to understand what we need to do in this situation.

Keeping Facebook Safe

When you look back over the past several years, has there been a shift at Facebook? Has there been a bit of a reckoning in terms of how big it is, the scale of the problems, the scale of actually being a more responsive and responsible company? What has the shift been, if any?

There’s a lot more investment, particularly in technology and in hiring content reviewers and people who have the expertise to assess this content quickly. When I took this role five years ago, we already had a set of policies in place that identified the types of content that we didn’t want on the site, and those categories are largely the same today. We don’t want harassment; we don’t want hate speech; we don’t want bullying; we don’t want terror propaganda. But where we are now – in terms of using technology to flag potential violations; hiring people who can effectively tell us what the trends are, what we need to be looking for, and how we determine what is hate speech in Germany or Afghanistan; and having the language specialists to review that content and remove it – we are far beyond where I thought we would be at this point. We still have a lot of work to do.

What has it been like with Mark [Zuckerberg] as the leader of this company in terms of either internal Q&A’s or any meetings or any anecdotes you might have about the grappling that may have happened over the past couple of years with significant things, significant problems. Has there been a change you’ve seen in him? Is there anything you can point to that might be illustrative of that?

Mark has been very public about how much he cares about these issues. He’s not alone. That’s true of our entire senior leadership team. Because of that, as we’ve grown as a company, we’ve seen more and more investment in hiring the right people who can help us get better at this. If you had talked to the engineers at Facebook five years ago and asked, “What are the things that you want to work on?,” I think you would have gotten one set of answers. Now, because of the investments that we’re making as a company, I think engineers would say, “I want to work on these systems that are helping keep our community safe.”

We are now hiring a lot of engineers who are working on that and are making it so that when content is uploaded to Facebook, we are able to use technical tools to flag it, sometimes removing it even at the time of upload. You think about terrorism propaganda. In the first quarter of 2018, we removed 1.9 million posts for being related to terror propaganda, and of those, more than 99 percent we identified using the technical tools that our engineers have built.

Was it a point of frustration for you going back five years that here you were setting policies but that they may not have been implemented sufficiently?

Implementing these policies is difficult work, and those challenges that existed back then still exist today. One of the big ones is how do you craft a set of policies that can be applied at scale when you have millions of reports every week, and your technical tool is flagging some other content that might violate our policies, and it’s in dozens of languages, and you just don’t always have the context. Those challenges are real now just as they were real then.

But we have gotten better at dealing with them, and that’s because of the technical tools and the people that we’ve hired.

In terms of policy, you’re a lawyer. Lawyers deal in precedent, and law is all about what the precedent is and sticking to it. In some ways, it seems that your rules are almost changing by the day; that there’s constantly having to shift for something that’s happening in an area of the world; that there’s a conflict, whether it’s in Sri Lanka or it’s in Myanmar or it’s in the United States, that there’s a constant shifting of what the conflicts are and all these different languages and all these different places. Are you basically building a set of rules as you go?

We are constantly taking a look at these policies and refining them. Every couple of weeks we issue updates to our policies, and those are reflected in the standards that we publish online that anybody can see. That’s probably never going to change, and the reason for that is online speech will continue to change. The people who are online, that will continue to change, and people’s ideas about what is acceptable to say, that’s going to keep changing, too. So in the course of a given month, for instance, we may have a word that is now being used as a slur in one country; we may have a new trend against women in another country where their profile photos are being stolen and misused.

We have to stay current on these issues and give our content reviewers new guidance on a regular basis to keep the site safe.

A slur, for instance, is different, I’d imagine, than hate speech or an incitement of violence. Where are you drawing the lines in terms of language and regulating language and moderating for language, and how can you possibly do that at scale? How can you possibly do it?

Doing it at scale is challenging, especially because we don’t always have the context. So we do put very detailed rules online explaining how we define hate speech; how we define slurs and when we will remove slurs from the site; how we define bullying, for instance. When you look at something like a slur, somebody might use that word to attack somebody else, in which case we would remove it. But you can also imagine somebody saying, “Today when I was walking down the street, somebody called me this; I can’t believe people still use that word.” Well, we would want to allow that.

It’s being used to raise awareness and for a good purpose, so we have to give our reviewers as much context as possible so that they can see the language that is there and try to make a decision: Is this slur being used as an attack? That’s not always an easy call.

It would seem that you’d also need a hell of a lot more people already than you have to be able to handle this type of thing. I know that we’ve talked about this 38 percent number when it comes to hate speech. How do you even fill in the gap of what’s not being taken down or that’s already been flagged? How do you do it with the staff that you already have?

There are two ways that content might get sent to our teams. One is if our proactive systems flag the content as maybe breaching our standards, and another is if somebody has reported the content through the site. Somebody using the site can report any piece of content to us, whether it’s a page, a profile, a photo, a post, a comment. If it is flagged by somebody on the site or by our technology, it comes in to one of our content reviewers, and they have to make that decision about whether or not something violates our policies.

Facebook’s Rules For A Global Community

Do you ever kind of step back – I mean, it’s such a crazy role in some ways, in that you’re basically coming up with rules for a community of 2.2 billion people. That’s insanely diverse, and it’s all over the world. Are you ever uncomfortable with the role that you have in drawing these lines, and what do you say to skeptics who say this could go too far in the other direction, that this could become like the thought police or the speech police?

I think there is a misperception that I am drawing the lines by myself or my team is or the company is in a vacuum, and the reality is that we are working every day with organizations outside Facebook to get their input in actually crafting these standards and making these decisions. Every two weeks, we come together in a global meeting that has representatives from teams around the company, engineers and operations specialists, content reviewers, lawyers, communications specialists, and we talk about the issues that we’re seeing and whether or not we need to refine our policies.

A critical part of that conversation is, what are external groups telling us? Before we make decisions to refine our policies, we may reach out to several dozen groups, and we may present to them options. “Hey, we’re thinking about moving this line. If we move this line, it would mean that this sort of speech would be removed. How do you feel about that, and what are we missing?” The input that we get from those organizations is fundamental to how we draw these lines.

Still, fundamentally it’s your line to draw as Facebook, right? You may be reaching out to all these groups, but I’m just wondering if there’s ever a time when you’re uncomfortable with that sort of power.

It’s certainly a serious task to make sure that we are keeping people safe on the site, and that’s a responsibility that we take really seriously.

Help our viewers understand how gargantuan a task that is, though. We’re talking about the nuances of conflicts all over the place and essentially moderating the conversations to some degree between all of the people on this platform. It’s kind of an unfathomable task to me. I can’t imagine how that works.

The volume that we have to assess is very large. …The reality is we have more than 2 billion people regularly coming to Facebook, and if something is posted that is potentially violating our policies, we need to be able to review it quickly and accurately. We get millions of reports every week, so when my team is writing these policies, we can’t think, OK, well, let’s write this policy as if we’re going to have the time to sit around and debate this with a global group five times a week.

No, it has to be something that we can actually apply at scale to millions of reports every week. That means our guidance has to be very objective. It has to be fairly black and white: this is how we define an attack; this is how we define bullying. And it means that the lines are often more blunt than we would like them to be.

Again, it may be a more philosophical question, but do you ever think there might be something inherently problematic about having a private company be essentially this kind of digital public space for this many people all over the world – I mean that there’s something fundamentally problematic about that idea of there being such a predominant social network like Facebook?

Well, again, when we’re thinking about what types of speech we want to allow on Facebook, that’s something that we’re doing very much in collaboration with people in our community and groups around the world. This is certainly not something that we’re doing in a silo. But it is a challenge to think of a set of rules that can work for a global community. More than 85 percent of people using Facebook are outside the United States and Canada, so you’re talking about people with really different ideas about what sort of content should be online, and it’s also a challenge to write these rules in a way that can be implemented quickly and accurately at the scale, at the scale of millions of reports every week.

But we also know this is something that we need to do for our community. We need to keep Facebook safe, or people are not going to want to come and share and talk to one another.

But in some places around the world, like Myanmar and the Philippines, it is essentially – it’s sort of the Internet in a lot of places, and there aren’t that many other choices at this point. Does that dawn on you in terms of the gravity of what that means?

Absolutely. Part of my job is going around to different communities, so I travel a lot, and my team travels a lot. We also travel with engineers and content reviewers and government relations specialists and others from Facebook, and when we’re talking to people on the ground, sometimes we’ll hear from them: “This is a fundamental part of being able to communicate in our country. Using Facebook is fundamental to being able to understand the situation, to being able to understand what’s going on in our country.” That really resonates with us.

Change At Facebook

I’m curious about the trajectory of the past couple of years in terms of, have you seen a change in Mark? Have you seen a change in the tenor of how this company operates beyond just investment? Increased investment, I get it, but in terms of a reflective nature of what it is that has been created here – obviously the good elements of it. But all of it – the scale, the size, the drive for growth, the drive into places around the world without the capabilities of necessarily dealing with all of the conflicts and issues that might arise – what’s been the reckoning here?

Mark and Sheryl [Sandberg] and our other senior leaders have always been very public about their commitment to safety, but I have seen that as the company has grown, these issues that have always been central to my career have begun to be more central to the conversations you hear around the hallways at Facebook. I came to Facebook after having spent more than a decade working on issues like child sex trafficking and terrorism. These things are real to me. I have been on the ground. I have talked to victims of crimes. I’ve worked with lots of victims of crimes, and when I see something on Facebook that is violating our terms and making somebody unsafe, that is something that strikes me very deeply.

When you now look at the people who are working on these issues at Facebook, you have teams of people who have been hired because they have the same background, because they have those same passions, and that’s something that I think has really grown throughout the company. It’s something that you feel much more now than you would have, say, five years ago.

You came as a very seasoned person with a lot of experience both abroad and here as a prosecutor, having dealt with some of the darker sides of humanity. When you walked in five years ago, it strikes me as you walk around here that there’s youth. Silicon Valley is already kind of a bubble to some degree. Did it strike you when you came here that the culture is youthful, that it isn’t necessarily global in its thinking, or hasn’t necessarily been exposed to the darker sides of humanity?

When the company hired me into this role, I think part of it was because I had this background working on safety issues in the real world. The company at the highest levels was paying attention to this and really wanted to bring in expertise, but it was a small company, and if I think back to what my team looked like at that time, it was a team of people who mostly had a background in operations work; they had been content reviewers, or they had done other jobs at the company.

That has changed significantly over the course of the past five years. The people that we’ve brought onto the team – and this isn’t just in the past year; it’s over the course of the past five years – have deep experience in these areas, in part because the senior leadership of this company has recognized that that’s what we needed to build a community, to build an approach that really suited the needs of our community.

Political Advertising And Targeting

Since the 2016 election, has there been any reconsideration inside of Facebook about political advertising and microtargeting?

Definitely. If I think back to where we were in 2016 versus where we are now, we have gotten much tighter in some areas, and we are also demanding much more transparency in our advertising. On the first prong, we know that content shared in advance of the 2016 election, and even after the 2016 election, violated our policies. We weren’t fast enough at catching it. What we’ve done since then is build technical tools that have helped us find those sorts of inauthentic accounts, fake accounts and inauthentic networks much more quickly and remove them. In advance of the French election, the German election and the election in Mexico, we were able to remove thousands of accounts that were inauthentic and shouldn’t have been there.

Now there’s a second prong, and that is making sure that when people see ads that relate to political issues, they know who is behind the ad. This is something we didn’t have before the 2016 election. But now, as people are running political-issue advertisements on Facebook, we are verifying their identity, and we are putting those ads in a place where people, even if they weren’t targeted with the ad, can go and see on Facebook how many ads are running on a particular issue. They can see who’s behind those ads, and if you’re served an ad on Facebook and it relates to one of these issues, you can see who’s behind it.

Bringing greater transparency to advertisements is a way of letting people know who was behind the ads and whether or not they should rely on the material they’re seeing.

Should there be a stricter policy about the power of microtargeting and who can be targeted and how precise you can get with the targeting? What’s changed in terms of the thinking here about the use of the microtargeting tools that you’ve developed at Facebook?

We do have strict limits on how people can target advertisements, and we’ve learned from that over the years. For instance, there was a time when you could type in certain words to target people based on their interests and things they had expressed, and we saw abuses of that list of words, so we’ve now changed that: we review the words that people are able to use. We’ve certainly gotten tighter on advertising targeting over time.

This is also an area where it’s important for us to understand what regulators are concerned about. They can sometimes point us to issues that we need to be aware of; for instance, making sure that there isn’t discrimination in the way that ads are targeted. These are things we take really seriously, so we engage with regulators, with academics, with civil society organizations to make sure we understand their concerns and are restricting the targeting accordingly.