The Facebook Dilemma | Interview Of Guy Rosen: Facebook VP Of Product Management

Guy Rosen has served as the vice president of product management at Facebook since 2013. This is the transcript of an interview with Frontline’s James Jacoby conducted on September 5, 2018. It has been edited in parts for clarity and length.

Bring me back to the moment when you were given this task. Where were you? Who gave you the task? Bring me into the story of being charged with this enormous responsibility.

So let me step back and explain how my role works. I’m responsible overall for the safety and security work at Facebook, and I approach it from the product and the technical side, working very closely with policy and with operations, because fighting abuse on Facebook is something that requires all three of those to constantly work in lockstep.

And a lot of the work that we’ve been doing is understanding how we move from being reactive to being proactive and how we approach the task of understanding and finding bad content. So traditionally, the way my team works is if you see something that you think shouldn’t be on Facebook, we build the tools for you to report that content to us. We build the tools for our reviewers to review that content. And increasingly, we’re investing more and more in tools that will proactively go out and find that kind of content because as we get more proactive, we can find bad content faster, which means we can get to it before anyone reports it. We can even get to it before anyone sees it. And we can just take down more bad content.

Identifying Bad Content

How far along are you in developing that tool, for instance – the tool to proactively find bad content?

So the way artificial intelligence works broadly is it’s a system that learns by example. And we’re kind of fortunate, in a sense, that we have millions of reports coming in every week from our community and we have the corresponding decisions made by our review team. And so we can use those to try and train, as it’s called, train the system to try to proactively detect similar kinds of content.
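
To make the “learning by example” idea concrete, here is a minimal, hypothetical sketch of the general approach described above: a text classifier trained on pairs of reported posts and the corresponding reviewer decisions, then used to score new, unreported content. The data, labels, and model choice are illustrative assumptions, not Facebook’s actual system.

```python
# Minimal illustration of "learning by example" for proactive detection.
# The posts, labels, and model below are hypothetical; a production system
# would use far more data, many languages, and many violation types.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: reported posts plus the reviewer's decision.
reported_posts = [
    "buy followers cheap, click this link now",
    "happy birthday, hope you have a great day",
    "explicit threat of violence against a group",
    "photos from our family vacation",
]
reviewer_decisions = ["remove", "keep", "remove", "keep"]

# Train a simple classifier on the labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reported_posts, reviewer_decisions)

# New, unreported content can now be scored proactively. In practice,
# high-confidence predictions would typically be routed to human reviewers
# rather than acted on automatically.
new_posts = ["click this link to buy followers", "see you at dinner tonight"]
print(model.predict(new_posts))        # e.g. ['remove' 'keep']
print(model.predict_proba(new_posts))  # confidence scores for each class
```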

And we’re seeing that in some areas it is working very well and in other areas we have been making steady progress. So for example, in areas such as nudity or graphic violence, years of research into computer vision are actually bearing a lot of fruit. On nudity, 96 percent of the nudity that we remove is actually identified by our systems. On terrorist propaganda, over 99 percent of the terrorist propaganda that we take down is identified by our systems. So that’s before anyone even reports it.

Areas like hate speech, for example, are more complex because machines aren’t always that good at understanding the nuances and the context of language. And so we’re certainly more dependent on people to help us do that work, whether it’s people who report content to us or people who review that content. So on hate speech, for example, in the last quarter of 2017, 23 percent of the content that we took down was flagged by our systems. In the first quarter of 2018, that number went up to 38 percent. So there is steady progress there. We have more work to do, and we are making progress.

And that means if only 38 percent of the content’s been taken down, that you need human beings then to deal with the rest of it?

To be clear, that means 38 percent of the content that was taken down was flagged by our systems, as opposed to the rest which was reported by users. So all of the [problematic] content was taken down. And the way we measure our progress on the artificial intelligence work is to ask: Of the things that we take down, what percentage was detected by our systems and what part was still dependent on reports coming in from users? And we know we need to keep pushing that number up because that means we are becoming more proactive and we can get ahead of things. And the more proactive we are the faster we can take down that content before anyone else reports it, and even before anyone sees it. And we can take down more bad content.
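
As a worked example of the metric described above – of everything taken down, what share was flagged by automated systems before any user report – here is a small, illustrative calculation. The function name and counts are hypothetical; the percentages simply match the hate-speech figures cited earlier in the interview.

```python
# Illustrative only: the "proactive rate" metric described in the interview.
# Of all violating content taken down, what share was flagged by automated
# systems before any user reported it?
def proactive_rate(flagged_by_systems: int, total_taken_down: int) -> float:
    """Fraction of removed content that was detected proactively."""
    return flagged_by_systems / total_taken_down

# Hypothetical counts chosen to reproduce the hate-speech figures cited above.
print(f"Q4 2017: {proactive_rate(230, 1000):.0%}")  # -> 23%
print(f"Q1 2018: {proactive_rate(380, 1000):.0%}")  # -> 38%
```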

I mean it would seem rather difficult, even for a human being content moderator, to figure out what is or what is not hate speech, especially when you’re operating in so many different countries, so many different languages, so many different cultural contextual nuances at play. I mean is that a near-impossible task to expect a human being, an army of humans, to be able to do that?

One of the things we do is try to define very clear community standards. So the way we approach this is we want a community where people can express themselves and can have robust discussion. It’s really important. But we also need to balance that with keeping people safe. That means we need to draw lines somewhere, and honestly, there’s no one right place for those lines to be. But we’ve drawn the lines with what we call the community standards. And one of the things we did is actually publish earlier this year the full guidelines we use for enforcing the community standards. And if you look at them you can actually see how they describe in a very crisp way where those lines are.

So for example, in areas like nudity or graphic violence, we don’t leave that up to the judgment of an individual reviewer, but we actually specify where exactly does that line go, down to the gory details (literally gory in the case of areas like graphic violence), so that we can be consistent in how we apply those policies. And the most important thing in releasing those is to be able to have a discussion with people outside the company – with folks in academia, with NGOs [nongovernmental organizations], with human rights organizations – and to have a discussion on where should those lines be drawn.

They will continue to evolve as we learn, as we get feedback from the community. But it is really important for us to have those lines and to transparently communicate those.

Genocide In Myanmar

The situation in Myanmar – do you classify that as a hate speech situation, that in some ways Facebook has played a role in the genocide there by amplifying hate speech and allowing it to spread there? Do you see that as a hate speech problem?

The ethnic violence in Myanmar is horrifying, and we were too slow to spot how this was playing out on our platform. But we have made progress. We’ve made progress on building better reporting flows. We’ve made progress on working with organizations on the ground that can help us understand how a situation is playing out. We’ve made progress in hiring reviewers who speak the local language and understand the context. And we’ve made progress on technology to proactively identify hate speech.

So for example, last year 13 percent of the hate speech that we took down in Myanmar was detected by our systems. And as we made progress and as artificial intelligence technology advanced, we resolved that we would try to make it work in Burmese, which is the local language, despite a lot of complexity that is unique to the way that language works. And now over 52 percent of the hate speech that we take down in Myanmar is flagged by our systems. That’s over half. And as we become more proactive in taking that content down, it means we can take it down faster, which means we can get to it before anyone reports it or even before anyone sees it, and we can take down more bad content.

You used the word “slow,” that it was a slow response by Facebook to this. But we’ve spoken to people that, as early as 2015, had spoken to Facebook about the potential for a genocide, a Rwanda-type situation in Myanmar. Be more explicit, if you could, about what “slow” means to you. What did that mean in terms of responding to the issue of hate speech there?

I think the situation and the violence in Myanmar is horrifying. And across the safety and security work, [Facebook founder and CEO] Mark [Zuckerberg] has said we didn’t invest enough, and we’re changing that. We believe – and I firmly believe – that Facebook brings a lot of good into the world. But there is also abuse and there is also bad content, and it is our responsibility to minimize the bad and to maximize the good. And we’re investing. We’re going this year from 10,000 to 20,000 people working on safety and security.

It is a huge philosophical shift for us. People talk about the shift to mobile that the company underwent five or six years ago, which really changed how the company operates. This is a shift of even greater magnitude. And it is harder and more challenging because there are adversaries on the other side who will try to evade and try to exploit our systems. And there will be new challenges all the time. But it is our responsibility. It’s our job, it’s literally my job, to be proactive and get ahead of it.

Facebook In The Philippines

There’s a situation in the Philippines, for instance, where President [Rodrigo] Duterte has been attacking critics, mobilizing large numbers of people on Facebook to attack his critics and to basically threaten people. And it’s organic to Facebook. There’s hate speech, there’s memes, there’s all sorts of things. What are you doing to correct that issue in the Philippines?

In the Philippines, we need to keep pushing on people and technology. People – we have reviewers who understand the local context. We have people on the ground who can work with NGOs and with journalists to better understand the experiences that are happening on our platform so that we can build the right solutions and technologies, like proactively detecting fake accounts, proactively detecting hate speech, building tools for people to prevent harassment, and a lot of developments in our work on misinformation. We work with three fact-checking partners in the Philippines. We have Rappler; we have Vera Files; we have AFP [Agence France-Presse]. And the goal is for misinformation to not spread on Facebook.

One of the things that we’ve heard from some of those third-party groups is that they’re spending a lot of time helping you guys fix this mess while they should be spending time reporting on what’s going on. Are you shifting too much of the responsibility onto third parties in places like the Philippines or Myanmar in order to fix something that should be inherently your problem?

I think safety and security broadly is something where, as an industry, we need to partner together – whether it’s one technology company or a series of technology companies. There are threats out there that are greater than any single platform, and we need all the players involved to work together to help understand bad actors and to identify where they’re coming from, so that we can go and proactively take action on them.

Facebook And Policing Content

One of the strange things about the philosophical shift at Facebook toward taking responsibility for content is that it seems to me it may now be a “careful what you wish for” situation. If we want Facebook to take more responsibility, it means essentially creating a larger army of people who are censoring content that’s out there, or censoring the community. Do you see yourself as building censorship tools to some degree?

We see ourselves as trying to balance giving people a voice and creating that place for discussion with keeping people safe. And there is no one right way to do this. But that’s why we have to draw lines somewhere, and that’s why we need to be very transparent about where those lines are, so that we can evolve and so that we can learn. Those manifest in our community standards, where we articulate exactly where the lines are and what kind of content we take down. And we will continue to evolve and learn the right place for those things to be, because there is no one right solution for this. But it has to be a balance, and we have to give people a place to express themselves and to have conversation, while also watching for the edge cases and for the bad content, the bad actors and the bad behavior, so that we can keep people safe on the platform.

In terms of a track record – the content moderation system that Facebook had in place for a long time, with people reporting content, do you think that worked well? Did it work well as a model for people to be able to flag content and get problematic content taken down?

I actually think for the foreseeable future this will be a combination of people and technology. Artificial intelligence is making a lot of progress, but it will always be limited in what it can do and we will always need people to report content and people to help review that content. And as we’ve made progress, we’ve been able to be more proactive and get ahead and take down more bad content and take it down faster. But people will continue to be part of the equation.

And what about transparency about who those people are, where they are, how they’re trained, what their backgrounds are? I mean these are the people who are doing this work when you’re hiring up. I think for a lot of people we’ve spoken to out in the world who’ve had problems with content, they’ve had difficulties locating who it is that’s actually judging this content – things that were taken down that maybe shouldn’t have been taken down; things that should have been taken down that were never taken down. What transparency are you giving in terms of who the people are that are moderating and where they are and what their qualifications are to do this work?

We employ people around the world who bring knowledge of languages and of cultural context, so they can better understand. But the focus that we have is on the standards – on the rules they use to make these decisions. And that’s why it was really important for us to publish the community standards with their full internal guidelines, so that people could actually look and understand what decisions are being made. And when we make mistakes – and mistakes will be made – they can understand: Is it just a mistake that was made as part of the enforcement process, or am I disagreeing with the rules and the way that they’re written? And we welcome the continued conversation about where those lines are.

So what assurance could you give, for instance, to someone in Ukraine whose content has either been taken down or been reported as problematic – what can you tell them about the people doing the moderating? Are they Ukrainian or Russian, and what’s their background? Because any individual, despite the rules you might have in place, could probably bring some bias to what they’re reviewing, or could think one word means something in Ukrainian that it doesn’t mean in Russian, for instance.

We employ people who bring knowledge of the local context.

How can people be assured of that? I mean what transparency could you offer to us or to people that are affected by content – either good or bad content – that these people actually do reflect something that’s more nuanced if we don’t know who they are?

One of the things that’s really important that we’ve been rolling out this year is the ability to appeal. So if a piece of content is taken down, if a certain decision is made, people should have the ability to ask for an additional review and provide additional context. We use those appeals, first of all, to help correct mistakes that may have been made, but also to provide a signal so that we can see where and what kinds of mistakes are being made and continue to improve those systems. Now, more broadly, we have been very focused on sharing and publishing metrics that can help give a sense of the progress that is being made.

So taking a step back, one of the most important things in approaching a system of this scale has been to take a methodical approach. And the most important thing in a methodical approach is to have measurement, to have metrics, because metrics will help us understand the experiences that 2 billion people in various countries are actually having. They will help us identify where the gaps are and where we need to put more attention and focus, and they will help us measure and track the progress that we’ve made.

As we’ve developed those metrics internally to help our teams operate, last year we resolved to publish them so that we could also have a conversation with the world about the right way to measure systems such as this. How should the industry benchmark these kinds of systems, and how do we track the progress that is being made? In May, we published our first Community Standards Enforcement Report, which lays out a number of measures across different types of violations to reflect the state of how things are going and where the progress is. There will be more; we will continue to publish those.

And we really think that by having those numbers out and by having a conversation about what is the right way to reflect the progress, we can understand and be accountable to the world, frankly, for the kind of progress that’s being made and understand where the gaps are and where we need to do more work.

Facebook And Advertising

In terms of the ad review, that’s your purview, right, is the new tools that you’re…

Transparency and verification.

So one of the main issues in the 2016 election was malicious advertising on Facebook. What assurances are there that that’s not going to happen in the 2018 midterms?

We’ve built a lot of systems to help to provide transparency to people on ads. And so first of all, on any page you can click and you can see all of the ads that are running on that page, which gives anyone full transparency into any ads that people are actually running, even if they’re not targeted specifically at me or at any specific person.

The other very important thing is political advertising. We need to make sure that there is authenticity in political advertising. And so any ad that is political, or even around a political issue, goes through a verification process where we verify the identity of the person who is running the ad. We verify that they’re in the United States, and we add a disclosure label that shows who has taken out the ad – kind of like what we have on TV, but actually with even more verification. And the goal is really for political ads to be something that people can trust and can understand when they see them in their News Feed.

Are people going to have any idea as to who’s been targeted by those ads?

So we have a political ad archive where all of the ads, including some of the details on their settings, will be available. And we’ve already seen researchers and journalists use our political ad archive to go and look at the trends. So for example, even around the Supreme Court nominations, different groups took out different ads promoting different viewpoints on the candidates. And there were multiple stories in the media, using our political ad archive, that reflected on what those messages were and which groups were running which kinds of messages.

In terms of the bigger picture here, in terms of what your task is – is it an impossible task? You’re talking about more than 2 billion people connected to a single platform. You’re talking about dozens of countries, multiple languages. Given the magnitude of that and given, frankly, the track record of this company with its responsibilities, I’m skeptical that this can work. What do you have to say to me – what do you have to say to a skeptic who says this just sounds like an impossible task?

We clearly have a lot of work ahead of us. But when we have put our minds to things, then we have been able to turn the tide. And we have been able to move in the case of safety and security to being proactive, to getting ahead of threats, to taking down bad actors, to finding more bad content.

And this is a huge investment. So last year, for example – every year, Facebook goes through a planning process where every team across the company lays out a number of plans and ultimately Mark decides which teams get which kind of resources. Last year, as we were just starting this process, Mark sent me an email and he said, “Before I even start it with anyone else, how much do you need?” And to me that made crystal clear that this is the most important thing for us as a company. And as a result, we’re growing this year from 10,000 to 20,000 people working on safety and security and we’re making progress in a number of areas.

We have a lot of work ahead of us but there’s a lot of progress that’s been made and there will be challenges. But it is our responsibility and it’s my job to be ready for them and to get ahead of them.

So what is your biggest challenge?

There is a lot of – sorry, let me reframe that. We have a lot of work ahead of us. And –

Like what, for instance? What work do you have ahead? And how quickly can you do it?

This is the kind of work that never really ends, because we will have adversaries on the other side who are trying to evade or exploit our systems. And so we have to constantly be ready and be proactive and keep learning so that we can get ahead of the next challenges that come our way, and that is our job to do.

Do you think when Mark gave you essentially a blank check to do the work that you’re doing, was he in some way trying to salvage the essential idea of what he’s created, which is this idea of connecting everyone on earth to his single platform? Because it seems like almost a Hail Mary pass for the beauty of that idea, the idealism of that idea.

We know we have a responsibility. Facebook brings many good things into the world. But there is also abuse and we have to minimize the bad and maximize the good. That is our job and there will be challenges. There will be adversaries who will try to evade our systems. We have to be ready and it is our responsibility. It’s literally my job to continue to get ahead of those.