The Facebook Dilemma | Interview Of Nathaniel Gleicher: Facebook Head Of Cybersecurity Policy

Nathaniel Gleicher joined Facebook as the head of cybersecurity policy in 2018. He was previously a senior associate in a technology program at the Center for Strategic and International Studies. This is the transcript of an interview with Frontline's James Jacoby conducted on September 6, 2018. It has been edited in parts for clarity and length.

Election Security

Is it basically your job to protect these elections coming up from foreign actors interfering with Facebook?

It’s a big team, but I think what we’re focused on is not just around elections, but any time you have open public debate like this, there are going to be actors that are going to try to manipulate that public debate. How do we figure out what are the techniques they’re using, and how do we make it much harder?

And how actually do you do that?

So two pieces, right? The first is we have a manual team of investigators that is sort of searching for the bad behavior. We think of this kind of like looking for needles in a haystack, because it can take a long time. It can be really burdensome to run these investigations. It can take weeks; it can take months. But this is the most effective way to find the really sophisticated bad actors. You complement that with much more scaled work. A good example of this is, we've seen a lot of bad actors rely on fake accounts that they will use to conceal their identity and to drive their false narratives. Even though there are plenty of less sophisticated bad actors that use fake accounts, by scaling up our ability to take those down, we make it much harder for them to use that tool.

So the first half is like looking for a needle in a haystack. The second piece is sort of shrinking the haystack and making that investigation easier, and you have to have both.
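
To make the "shrinking the haystack" idea concrete, here is a purely illustrative sketch of the kind of behavioral scoring that scaled fake-account detection can involve. Nothing here is Facebook's actual system; the signal names, weights, and thresholds are hypothetical, chosen only to show how weak behavioral signals might be combined into a triage score for human review.

```python
# Illustrative sketch only: a toy behavioral score for flagging likely fake
# accounts, loosely in the spirit of "shrinking the haystack." All signals
# and thresholds here are hypothetical, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int       # very new accounts are weakly suspicious
    posts_per_day: float        # extreme posting volume is a common bot tell
    followers: int
    following: int
    profile_photo_reused: bool  # photo matches images seen on other accounts

def fake_account_score(s: AccountSignals) -> float:
    """Combine weak behavioral signals into a 0..1 triage score.

    Accounts above a review threshold would be queued for closer
    (human) inspection, not removed automatically.
    """
    score = 0.0
    if s.account_age_days < 7:
        score += 0.25
    if s.posts_per_day > 100:
        score += 0.30
    # Following far more accounts than follow back is a weak spam signal.
    if s.following > 10 * max(s.followers, 1):
        score += 0.20
    if s.profile_photo_reused:
        score += 0.25
    return min(score, 1.0)

suspect = AccountSignals(account_age_days=2, posts_per_day=400,
                         followers=3, following=2000,
                         profile_photo_reused=True)
print(fake_account_score(suspect))  # 1.0 -> queue for manual review
```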

Going backward for one second, tell me the circumstances of you arriving here at Facebook. What were they? Bring me through the beginning of how you arrived here and why you were brought in for this job.

My career has been focused on how technology and society interact. Technology has this incredible potential to make things better, to encourage communication, to build out all the ways we talk to each other and understand each other. But one of the side effects of new communications media is that people are going to try to take advantage of them. That tension – how do you ensure that the positive benefits of technology can be realized? – is what I focused on in government, whether I was investigating cybercrime cases or I was working at the NSC, the National Security Council, thinking about cybersecurity policy.

Here at Facebook I’m able to work on a very similar problem, right, which is social media as the public square for political communications and public debate. How do we ensure that that public debate can be open, can be free and fair?

Without being interfered with by people that want to mess with things.

Right.

Facebook’s Response To The 2016 Election

… What was your thinking coming in here, if you can kind of bring me into if there has been an evolution. Here you are: You’re at Facebook; it’s in the midst of – what did you walk into exactly, and what were you thinking about it?

There are a number of different teams that work on information operations or manipulation within the company. Part of what I think we wanted to do with me joining was how do we connect all those efforts, and how do we think about a strategic layer that connects the Threat investigators, the legal teams, the product teams, all the different efforts that are trying to combat different parts of this problem? When I joined, it was really about how do we pull these pieces together?

If you had to give an assessment as to what the priorities were at that time when you came in, set the scene. Where are we in time? It was seven months ago. What was going on? What was the atmosphere here like? Because you'd been outside of this company; you'd been in government. And I'm curious, I mean, I don't work at Facebook: What's it like walking in from having had the experiences that you had – what's it like walking in here?

When I joined seven months ago, we had done some high-profile takedowns of the IRA [Internet Research Agency], of assets that they were running, and I think the question was how do we scale that ability to look for and disrupt foreign actors and any kind of malicious actors so that we can deal with all the types of threats that are manifesting on the platform? How do we build out our capacity, how do we expand our teams, and how do we connect everyone inside the company so that we can run this more quickly, more effectively and repeatedly?

You mention the IRA. Are there people working here every day on figuring out what the IRA is doing, what their playbook is? Give me a sense as to who's working on what in terms of these people that meddled in 2016, what they did there, and what they're doing now.

We have a core team of investigators that are focused on threats from all sorts of bad actors. Yes, we have investigators that are focused on IRA-style malicious actors. We have investigators that are focused on non-state actors. They're constantly thinking about what are these actors trying to do, how can we stop them, and how can we make it harder.

So, what the Internet Research Agency and the Russians did in the 2016 campaign – did you come in thinking that was a sophisticated campaign that they ran on Facebook?

What do you mean by that?

What I mean by that is were they really clever in how they operated and how they gamed it?

They were using a new medium in a way that no one had seen before, right? I think we have learned since then all sorts of ways in which they didn't conceal their identity as much as they might have, which has enabled us to really map out what they were doing. But the way that they used this new medium, we've said it was a surprise to us. We've said we didn't see it fast enough; government didn't see it fast enough. I think what we've all been focused on is, as we see the ways these bad actors are trying to use the platform, how can we get ahead of that?

And having come out of government, what years were you in the Obama – what years were you at the NSC?

At the NSC from 2013 to 2015.

So during that period of time, the Russians and the Internet Research Agency, they were waging disinformation campaigns in Ukraine, for instance, on social media and in part on Facebook. Was that something that was on the radar screen of government at that point?

I mean, we’ve had a heavy focus on the activity of threat actors like Russia. Often it’s been more focused on traditional cybersecurity threats, right, espionage-related activity.

So is that a yes? Was that something – was the activity of the Internet Research Agency, was fake accounts, was the tactics of spreading misinformation and disinformation, was that something in government, was that something that was a priority or was being studied?

That wasn’t a focus of my work in government.

Cybersecurity In Government And The Private Sector

In terms of what kind of relationship there was between the government and tech companies at that point in time, was there good information sharing going on between, for instance, the White House and the tech community about potential threats?

Information sharing is always a little challenging when you think about how government and the private sector should work together, because the two communities have different capabilities and need different kinds of information. And figuring out the right way to do that is sort of an ongoing challenge. It’s one of the things that we were focused on when I was in government – how can government better share information with the private sector? – and it’s one of the things we’re focused on here at Facebook now. How can we work with government and get information from them and share information with them to be as effective as possible?

I'm curious just historically, because it's sort of a black hole in terms of the record. There were articles written about the Internet Research Agency in 2015; there were already reports even as early as 2014 about the disinformation campaigns coming out of St. Petersburg, and DARPA [Defense Advanced Research Projects Agency] had a program that was studying how social media could be weaponized. I'm just kind of curious about was this something that was on – I mean, how was it not on the radar screen?

It's using a new medium in a different way, and we were certainly focused on the more traditional cybersecurity threats. I think government was focused on the more traditional cybersecurity threats.

So when you come in here, then, with the purview of understanding things from the outside, how organized were things in here actually to deal with the new threats that were coming up?

When I joined Facebook, you mean?

Yes.

We have several different teams that work on aspects of this problem, and they were all working together. I think what we wanted to do was build up that capability, right? We had already done this disruption, the major disruption from last fall. We’d already started to scale up substantially our ability to find and take down these actors, and the goal was how do we accelerate that, and how do we make it so we can do it not just more quickly, but more effectively?

And how is that done? What does that mean for a layperson? What does that actually mean to scale up, and what kind of challenge – inject me into what you have to face every day.

There's a core team of investigators that are looking for this bad behavior, right, and they run constant analysis: What do they see on the platform? What sorts of suspicious behavior are they seeing? They're particularly focused on tactics and techniques that we've seen in the past or that are rooted in, say, a tip we might get from law enforcement or an indication we might get from an outside partner like FireEye or the Atlantic Council. They take that information, and they build that out into a full picture of what's happening, right, and we work to try to get an understanding of the operation. Then as we do that, we reach a point where we know enough about it that we can take action, and we can try to take all that infrastructure down; we can try to eliminate the bad behavior.
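
As a rough illustration of the investigation flow described here – a tip or internal signal opens a case, investigators build a full picture, and the takedown happens only once the operation is mapped – here is a minimal, hypothetical state machine. The states and helper function are invented for explanation; they do not reflect Facebook's internal tooling.

```python
# Illustrative sketch of the investigation flow described above: a tip or
# suspicious signal opens a case, analysts enrich it into a fuller picture,
# and only once the network is mapped is the takedown executed. States and
# helpers are hypothetical, for explanation only.
from enum import Enum, auto

class CaseState(Enum):
    LEAD = auto()          # tip from law enforcement / outside partner / internal signal
    INVESTIGATING = auto() # mapping linked accounts, pages, and ad spend
    READY = auto()         # enough known to remove the whole infrastructure
    CLOSED = auto()

def advance(state: CaseState, enough_mapped: bool) -> CaseState:
    """Move a case forward; the takedown waits until the operation is mapped,
    so removing one account doesn't tip off the rest of the network."""
    if state is CaseState.LEAD:
        return CaseState.INVESTIGATING
    if state is CaseState.INVESTIGATING and enough_mapped:
        return CaseState.READY
    if state is CaseState.READY:
        return CaseState.CLOSED  # coordinated takedown executed
    return state

s = CaseState.LEAD
s = advance(s, enough_mapped=False)  # -> INVESTIGATING
s = advance(s, enough_mapped=True)   # -> READY
print(s)
```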

And what sorts of bad behavior have you been detecting in the past seven months?

We focus particularly on what we would call coordinated inauthentic behavior. What I mean by that is essentially a group of accounts, a group of individuals that are misrepresenting who they are on the platform. They appear to be, for instance, independent, but they are in fact coordinated surreptitiously by one particular actor in the background.
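
One crude, hypothetical proxy for coordinated inauthentic behavior is accounts that post identical content in near-lockstep. The sketch below flags account pairs that shared the same URL within a short window; the data model and time window are assumptions made for illustration, not a description of Facebook's detection systems.

```python
# Illustrative sketch: grouping accounts that post identical links in
# near-lockstep, one crude proxy for "coordinated inauthentic behavior."
# The data model and threshold are hypothetical, for explanation only.
from collections import defaultdict
from itertools import combinations

# (account_id, url, unix_timestamp) tuples, e.g. from a posting log
posts = [
    ("a1", "http://example.com/story", 1000),
    ("a2", "http://example.com/story", 1004),
    ("a3", "http://example.com/story", 1007),
    ("a4", "http://example.com/other", 9000),
]

WINDOW_SECONDS = 30  # "near-lockstep" window; hypothetical value

def coordinated_pairs(posts, window=WINDOW_SECONDS):
    """Yield pairs of accounts that shared the same URL within `window`."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((account, ts))
    for url, entries in by_url.items():
        for (acc_a, t_a), (acc_b, t_b) in combinations(entries, 2):
            if acc_a != acc_b and abs(t_a - t_b) <= window:
                yield (acc_a, acc_b, url)

for pair in coordinated_pairs(posts):
    print(pair)  # ('a1', 'a2', ...), ('a1', 'a3', ...), ('a2', 'a3', ...)
```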

And who are the actors that you’re discovering these days?

We see states. We've talked about content and activity that emanates from Russia and that emanates from Iran with links to Iranian state media. There are non-state entities. We've done takedowns, for instance, in Brazil that were linked entirely to domestic behavior, a network of apparently independent news organizations that was coordinated in the background by domestic entities. It's a wide range of actors.

How many investigators do you have?

We’ve been radically increasing the size of our teams working on security in general. We’ve doubled it from 10,000 to 20,000 people across the company. But as far as the investigative team, we don’t share particular team sizes.

Why not?

Because there are a lot of different people that work together in this space, and thinking about one team in particular kind of misses the point.

How does that miss the point, though? In asking, just it seems like a basic question to me. I could ask the FBI how many people they’ve got investigating cybercrime, for instance. They’d probably tell me, so why can’t you tell me?

Because our Threat investigative team works really closely with, for instance, our Community Operations team, who supports a lot of those investigations. That's why when we say we had 10,000 people working on security at the beginning of this year and will have 20,000 at the end of this year, that really is a meaningful number, because to do this kind of operation, you need all of the different players.

Can We Trust Facebook?

OK, but it's interesting, because in the interviews over the past couple of days, everyone's citing the 10,000-to-20,000 mark, and that can include content moderators; it seems like that can include a wide array of people. … How are we supposed to know, considering the gravity of the task that you have, which is essentially protecting one of the largest information sources for American voters in this upcoming election – how are we supposed to trust that you have all of the resources at your disposal that you need to be able to catch bad actors?

I think one of the best ways to look to that is to look at the pattern of disruptions, takedowns, we’ve done over the past several months. If you look at our takedowns from earlier in the year when I joined to now, there’s been a pretty substantial uptick in pace. These are manual disruptions, so the numbers are always going to be small, because what we do is we’re looking for the most sophisticated bad actors to take them down, but they’re coming much more quickly. And what’s important about that is when we take these actions, it’s not just us. The other technology companies are taking disruption actions as well. We see government providing us with support. As the community comes together, this isn’t a problem that any one company can tackle, but when all of us come together, I think we really do have a substantial uptick in capacity to disrupt this bad behavior and take it down.

And for the public, what measure, what standard should we have for you in terms of your success in protecting this upcoming election and the integrity of it?

We have and I have a responsibility to the users on the platform to make sure that public debate can be open; it can be authentic. That’s my goal.

What if it doesn’t go well? What if in a hypothetical, what if there are bad actors that interfere in the upcoming election on Facebook and other platforms potentially? What is the public supposed to do at that point? What is the public supposed to expect of Facebook? What’s the accountability if things don’t go well?

We know that there are bad actors trying to target public debate on social media continually, right? I think our goal here is how do we identify the core behaviors that they rely on, for instance, using fake accounts, leveraging advertising, operating in the shadows without transparency, and make those behaviors much more difficult.

But look at it from my perspective, or look at it from the public’s perspective for a second. If this doesn’t go well, if there’s a problem in the midterms that happens on Facebook, how are we supposed to hold Facebook accountable if the platform again is gamed by malicious actors?

What we’ve done consistently is we have identified the bad behavior. We’ve investigated it as quickly as possible, and we’ve moved to eliminate it, and not just that but to publicize it, to make sure that people understand what happened. One of the core things we all need to do is to understand better the bad behavior that’s occurring and the ways that these actors are trying to manipulate public debate so that we can make it more difficult going forward.

Do you think that answered my question? I mean, I’m serious. Honestly, this is a really important question in terms of if things don’t go well, and I certainly hope that they do, what recourse does the public have for what is essentially kind of an information utility for so many voters in this electorate? What are we supposed to think? What are we supposed to do?

Public debate is really essential, and I think the most important thing here is to make sure that it does go well, and that’s what we’re focused on. What we’re focused on is how do we have the resources, how do we have the teams, and how do we have the capability to put this together. But this is an effort that we need many organizations to work together on. I think Facebook has a critical piece of this, just like the other technology companies do; I think government has a critical piece of this. The public has a component of this, sort of understanding what’s happening in this space. Media has a piece of this. All of us have to work together to make sure it goes well.

Cybersecurity In Government And The Private Sector

How are you coordinating with government? Is there a coordinated effort right now in terms of election integrity?

When we're getting ready to, for instance, take a disruption action, we work with government to make sure they understand what's happening, to make sure that law enforcement is in a position to conduct the investigations they need to conduct, to run the activities they need to run. We regularly hear back from government when they see things that might help us take our disruptions and take action against these bad actors, so there's a constant dialogue.

Do you see Facebook, for instance, as a sort of first line of defense? The federal government can't monitor behavior on social media, so we have to rely upon a company like Facebook to monitor the network for malicious actors, right? Isn't that correct?

We and the government have different tools to do that, right? Government has a set of capacities that's really important. They are best placed, for example, to understand the intent behind a state actor and the reason they're engaging the way they are. The value that Facebook can bring is our ability to understand what's happening on our platform and to investigate bad behavior on the platform.

But government doesn't have a view into what is happening with American citizens on Facebook's platform right now, for instance.

Government has the view that any user of the platform would have, although of course when law enforcement provides us with lawful process, we respond and provide them information. So we regularly work with law enforcement to enable them to conduct their investigations.

Are you getting tips from law enforcement, for instance, from the FBI about potential active operations that are playing out on your platform?

Yes. A really good example of this is the disruption we did in late July, when we took down 32 or so assets that showed some links to Russia. In that case we talked about where that came from, and it came from three sources. One was our own analysis and investigation; the second was support from outside researchers; and the third was tips from law enforcement.

In a bigger-picture sense, isn’t there an aspect of this where we kind of just have to take Facebook’s word for it; if you take down X number of accounts, we have no idea how many other possible accounts there are, and you’re only doing as well as you say you’re doing, right? So there’s no way for us to audit how well you’re doing or how well you’re not doing. Doesn’t that put the public in an odd position to just have to kind of take your word for it?

Transparency and understanding this stuff, which is I think what you're focused on, is really important: understanding the type of bad behavior and how it's being acted against. Part of what we're trying to do there is to work more with the cybersecurity community, so we've worked recently with FireEye, and before that with the Atlantic Council, and the goal of those relationships is that those experts who are outside Facebook can conduct their own analysis of the content and of the activity that we've identified and that they've identified, and help the public understand more of what is going on. So if you look, … the Atlantic Council provided a pretty detailed analysis of the takedown from July, and FireEye provided their own detailed analysis of the more recent takedown.

Wasn’t it the Atlantic Council that actually found those accounts?

I think you're talking about FireEye, which found some of the accounts. In the takedown in July, which involved both Iran and Russia, there were four separate investigations. One of them came from a tip that was provided to us by FireEye. The other two investigations into Iran were the result of our own internal investigations, and the Russia one similarly was the result of our own internal investigations and work with law enforcement.

Can We Trust Facebook?

… One of the interesting ironies here is that in order to find bad actors on the platform, are you having to do more surveillance of Facebook users more generally?

I think one of the important things – and this gets to your point – is that there's a lot of focus on the nature of the content that these threat actors are sharing, and in the debate, there's this heavy focus on misinformation: what's being said, whether it's true or not. One of the important tools that we focus on is, rather than looking at the content, look at the behavior. For instance, the use of fake accounts, right? This is a pattern that is a clear violation of our terms of service and is also a clear indicator of inauthentic engagement, and it doesn't involve looking at the nature of the speech or considering what type of debate is happening.

For the simple forensics of even finding fake accounts, do you have to augment your surveillance efforts of the Facebook community in order to find them?

We use the same tools to identify these types of malicious actors as we use to make sure that we’re stopping bad actors that are driving, for example, hate speech or bullying or child pornography.

So it doesn’t require any more data analysis or looking at what could be someone who’s not a bad actor but finding out more about him or her? It doesn’t require more data than previously in order to find bad actors?

I mean, our invest –

We know how investigations work, so you need more intel in order to find bad actors. That’s just kind of the nature of it. So is that not what’s going on here?

Well, I think there’s two pieces that are important. The first is our investigators work within a very rigorous ethical framework of when they look at things and when they don’t in the course of an investigation to ensure that they’re minimizing any kind of impact of the sorts of investigation you’re talking about. The other piece is when we’re doing the broader analysis, when we’re looking at pattern matching, when we’re trying to identify these bad behaviors, we look at – these can be sort of anonymized behavior patterns. Are these organizations using large amounts of fake accounts? What sort of ads, and where are these ads coming from? These are indicators that can help us understand where to start looking.
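
A small sketch of what "anonymized behavior patterns" could look like in practice: aggregate checks over account and ad metadata that never inspect the content of anyone's speech. The record format, field names, and threshold below are hypothetical, for illustration only.

```python
# Illustrative sketch: an "anonymized behavior pattern" check in the spirit
# described above -- looking at aggregate account/ad metadata rather than
# the content of anyone's speech. Field names are hypothetical.
from collections import Counter

# Each record: (page_id, advertiser_country, paid_with_flagged_account)
ad_log = [
    ("page_1", "US", False),
    ("page_1", "US", False),
    ("page_2", "RU", True),
    ("page_2", "RU", True),
    ("page_2", "RU", True),
]

def pages_to_review(ad_log, min_flagged=2):
    """Return pages whose ads were repeatedly paid for by flagged accounts.

    Note that nothing here inspects what the ads said; only the
    behavioral metadata around who placed them.
    """
    flagged = Counter(page for page, _, bad in ad_log if bad)
    return [page for page, n in flagged.items() if n >= min_flagged]

print(pages_to_review(ad_log))  # ['page_2'] -> starting point for investigators
```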

Facebook’s Challenges For The Next Election

What’s your worst-case scenario for Election Day?

Our worst-case scenario?

Yeah.

I think we all know some of the things that are going to come and some of the things we expect these actors to use. I think from my perspective, I know that they’re going to do things I haven’t thought of, and we know that they’re going to do things we haven’t thought of. Trying to guess every single thing is going to mean we’re always one step behind. What’s more important is to have a process in place and to have a system in place so that we can identify these things early and we can move very quickly to respond.

I talked earlier about this idea of looking for needles and shrinking the haystack. Putting those two types of efforts together is really important here, because when you look for needles, when you’re looking for the most sophisticated bad actors, those are the ones that are going to try new techniques first, and having this sort of core team of investigators focused on that means you will get the earliest indications of new techniques like what you’re talking about. Then you can scale those up to have a much larger impact across the platform.

Are you able to talk to me at all about what new techniques you’re seeing bad actors use here?

One of the things that a number of people have focused on, and that we're certainly focused on, is manipulated media. People call them deepfakes: video, but it could also be photographs or audio, that is untrue and has been manipulated to appear legitimate, right? This is another kind of misinformation that we can see being used, so this is one of the things that we're focused on. How do we detect it? How do we understand it? How do we ensure that our policies are ready for it?

And you actually have an ability to detect it?

We're working on an ability to detect it. I think one of the things that's important here is there is no silver bullet for something like this, and automated detection technology is never a solution by itself. It's a part of the solution, which is why we have partners like third-party fact-checkers that can help us understand the nature of what's out there; which is why we have internal teams; which is why we have the movement from 10,000 people to 20,000 people, so that you have automated tools complemented with human analysis.
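
To illustrate the "automated tools complemented with human analysis" point, here is a hypothetical triage sketch in which an automated manipulated-media classifier acts on high-confidence detections and routes uncertain cases to human reviewers such as third-party fact-checkers. The classifier, thresholds, and labels are stand-ins, not a real detection API.

```python
# Illustrative sketch of "automated tools complemented with human analysis":
# a hypothetical manipulated-media classifier whose uncertain outputs are
# routed to human reviewers. `classify_media` is a stand-in, not a real API.
from typing import NamedTuple

class MediaVerdict(NamedTuple):
    manipulated_probability: float  # 0.0 = authentic, 1.0 = manipulated

def classify_media(media_bytes: bytes) -> MediaVerdict:
    # Placeholder for a trained detector. Real systems might analyze
    # compression traces, face-warping artifacts, audio, and so on.
    return MediaVerdict(manipulated_probability=0.55)

AUTO_ACTION = 0.95   # confident enough to act automatically (hypothetical)
HUMAN_REVIEW = 0.40  # uncertain band goes to fact-checkers (hypothetical)

def triage(media_bytes: bytes) -> str:
    p = classify_media(media_bytes).manipulated_probability
    if p >= AUTO_ACTION:
        return "enforce"       # clear policy violation
    if p >= HUMAN_REVIEW:
        return "human_review"  # automated tooling defers to people
    return "no_action"

print(triage(b"..."))  # 'human_review' for the placeholder score above
```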

What’s your confidence level going into the midterms then and Election Day in terms of how well prepared Facebook is to deal with bad actors?

We have been laser-focused on this problem, and I think the best way to measure our confidence level is to look at the impacts we’ve had. From my perspective, one of the most critical things in any crisis situation, which is what we’re talking about here in an election, is do you have the relationships, do you have the pathways and channels to make sure that you can move quickly when that happens. One advantage that we have is that we’ve been doing these disruptions, which means we’ve been exercising this process and these pathways, both inside the company to make sure that all the teams are working together in the way they need to, and also outside the company, because for everything that we do, we’re only one piece of the puzzle, and making sure that we can get information to and from government as we talked about quickly, to our partners in the private sector quickly, to the research community quickly, that’s just as important as everything else.

And what about a war room? Here's a scenario, right? Election Day comes along, and all of a sudden there's a piece of false information that there had been a hack on voting machines in a congressional race in Ohio somewhere, a hack that never actually happened, and it's starting to be passed around on social media. What preparedness is there to deal with something like that, or if there's any sort of information that's being seeded all throughout the social network that's trying to sow distrust in the actual process?

We have policies in place to act on information like that. But it’s really – I think the question you’re asking is bigger than that, right? First is, how do we learn about it, and how do we learn about it quickly enough; and then second is, how do we act on it quickly enough to make sure that we’re mitigating any consequences? This gets back to my point: We need to know and we need to have communications and connections with the people in the states who are going to see these things happening, with the people in government who are going to see these things happening, with all the different places where this information will bubble up, so we can learn it as quickly as possible.

And you’re going to be able to do that in real time?

You said a war room, and I think that’s a good image, but it’s important to remember we’re not just talking about a bunch of people sitting around a table. What we’re really talking about is distributed communications to make sure that the contact points are established so that this information can flow through all the different teams that need to work on this, and that’s both inside the company but also with all of our partners externally.

Is there going to be real-time monitoring on Election Day of what's going on in the conversation, as you put it, in the public square of Facebook, and how are you going to actually find things that may sow distrust in the election?

Absolutely. We're going to have a team on Election Day focused on that problem, and one thing that's useful here is we've already done this in recent elections that have come up. Whether you're talking about Mexico or other recent elections, we've been able to sort of build out our understanding of how best to do this.

And you’re confident you can do that here?

I think that – yes. I’m confident that we can do this here. Now, I think this is a really hard problem, and I think, as I said before, the potential for some sort of bad behavior is very high. There will be some kind of problematic behavior. The question is, do we have the teams and processes and resources in place, and does the community have the resources in place to respond to that quickly when it develops?

Regulating Big Data

If there were a legal standard, for instance, a legal standard that Facebook has to take down deepfake videos, for instance, would that incentivize you any further to actually get it done?

You’re talking about regulation here, right?

Yeah, sure.

Yeah. I think in the context of regulation, we've said very clearly that for us, it's not a question of whether there should be regulation; it's a question of whether we can have the right regulation. If you take an example like the Honest Ads Act, this is something where we have actually already implemented the controls that it would envision, even though the bill, which we support, hasn't been signed into law yet. So to your question about incentive, we're focused on this problem, and we're driving as far as we possibly can to do it, and what we've decided is rather than waiting for regulation, we need to put in place everything that we believe needs to be there and that we can do as quickly as possible.

In the U.K. Parliament, for instance, there was a committee that studied the fake news problem and a lot of other problems that happened on Facebook, and they've recommended, for instance, that there be legislation that holds Facebook liable if it doesn't take down the content of bad actors. What does that sound like to you? Is that actually a feasible piece of legislation, one that would put the liability on a private company like Facebook to take down bad actors?

We have to be really careful here in this balance, right? Open public debate, free and fair elections are the cornerstone of democratic society, and the last thing we would want is a drive toward protecting them that actually results in mitigating, in eliminating, in challenging free speech. There’s a balance here, and the thing that I worry about is making sure that we can strike that balance.

Explain that to me then. You think that if you were mandated to take down bad content that that would somehow mitigate free speech?

I think that the speed at which you react and our ability to analyze are really important. We have positioned ourselves to be able to identify and take down bad content, harmful content, particularly content that violates our Community Standards, under our policies, with incredible speed.