The Facebook Dilemma | Interview Of Alex Stamos: Former Facebook Chief Security Officer

Alex Stamos was the chief security officer at Facebook from 2015-2018. He is currently an adjunct professor at Stanford’s Freeman-Spogli Institute, a William J. Perry Fellow at the Center for International Security and Cooperation, and a visiting scholar at the Hoover Institution. This is the transcript of an interview with Frontline’s James Jacoby conducted on September 4, 2018. It has been edited in parts for clarity and length.

Election Security

In the Lawfare piece, you basically lay out a number of pretty astounding statements about what we’re facing going into the 2018 midterms. Could you encapsulate what you’re trying to convey right now to people about what we’re facing in 2018 and our preparedness generally?

One of my big concerns is we’ve had two years since the main part of the Russian attack against the 2016 election, and very little has been done as a country, as a government, to protect ourselves.

There have been fixes here and there, right? The tech companies have changed their policies, have staffed up; there’s a Foreign Influence Task Force in the FBI, but overall we have not seen the kind of all-of-society reaction that we really need if you want to prevent this kind of thing from happening again. What I think that leads to is we have signaled to the rest of the world that interfering in our elections is something that we won’t really punish or react to, and that’s really scary, because what the Russians did was not that sophisticated. The hacking part was pretty off-the-shelf kind of technical tools.

The disinformation component really just required people with basic English skills and the ability to read political blogs. You’re not talking about techniques that are incredibly difficult to recreate, so one of my big fears is that we’re going to see other U.S. adversaries – Iran, North Korea, China – jump into the information warfare space in 2018, and especially in 2020.

In terms of the large-scale thing that you would have liked to have seen done, what are you referring to specifically?

I tried to lay out four areas that we need to work on. First, we need better standards around online advertising. One of the frustrations that the tech companies have been dealing with is that the laws that control online advertising were written for TV ads, newspaper ads, so there’s no real guidance of what is considered acceptable controls and transparency around political ads online. We need Congress to clarify those rules. There’s a law that’s been proposed called the Honest Ads Act. I think that’s a good start. I proposed a couple more changes to it. I think we need to have better technical standards.

One of the great things that would be nice to see would be the ability to see all of the ads run by a campaign or a PAC in one place, but to make that happen we need to have standardization across all the ad platforms. The other thing we need to think about is how finely do we want to allow our politicians, our PACs, our political parties, to divvy up the electorate. This is something that Congress has never addressed, and right now it is legal to target individual people if the tools allow you to do so with a message just for them. I think that, with Russian interference or not, that is a bad direction for our democracy to go. We do not want politicians or political parties to be something different for every single voter in a district.

I think one of the easier solutions to that is to come up with standards for a minimum audience size you can advertise to online for a political ad, so the first step is Congress needs to pass those rules. Second, we need as a country to think about how we’re doing cybersecurity defense. The United States does not have a single body responsible for defensive cybersecurity. We have these really competent offensive units at the NSA [National Security Agency] and U.S. Cyber Command. They don’t do defense very much, and they don’t operate domestically. We have DHS [Department of Homeland Security], but DHS, their cyber component has not been seen as very competent by people in the cybersecurity industry, and they are super-focused on the power grid, dams, and the other kinds of critical infrastructure components.

As a result, the de facto defensive agency in the United States is the FBI. FBI is a law enforcement agency. They do do prevention. That’s obviously something that they’ve pivoted toward after 9/11 in the terrorism context. But right now we are seeing indictments for things that happened two years ago. That is not the pace at which you have to operate if you want to actually prevent attacks in the first place, so one of the things I’d like to see is us looking at some of our allies and how they do this.

During the German and French elections, after the U.S. election, we had partners in those governments that we could work directly with. The French have an organization called ANSSI [National Cyber Security Agency of France]. The Germans, it’s called the BSI [German Federal Office for Information Security]. And in both cases those agencies are independent from law enforcement but still have access to the intelligence resources that law enforcement has, and they’re considered neutral enough to work with all the political parties and to work with the tech companies. We just don’t have a convener like that, and I think that’s a problem not just in election security but in cybersecurity over all.

We really need to have a single agency with the technical competence necessary and without missions, like the law enforcement mission, that make it difficult for it to move quickly.

The tech companies like Facebook, are they basically the first line of defense for this 2018 midterm election in terms of foreign interference with the elections on social media?

Yeah. I think it’s interesting. The tech companies are basically acting in a quasi-governmental manner, right? They are doing the things that you would expect the government to do in a non-online context.

It’s because the companies operate the platforms upon which the activity happens. The companies have access to data, they have access to resources, and they’re not constrained by the First Amendment. When you’re fighting disinformation, the constraints that are placed on U.S. government agencies are actually, I think, one of the difficulties they’re facing, and that doesn’t exist in the private sector. So yeah, the tech companies are the first line of defense on disinformation, but the attack in 2016 was much larger than just disinformation on social media. The first line of defense against the kind of attack that the GRU – the main intelligence directorate of the Kremlin – pulled off are the IT people working at the political parties and on the campaigns, and that’s an area where a lot of improvement is still possible.

In terms of preparedness, though, for – I mean, you’ve recently left Facebook. Do you think Facebook is actually up to the task of protecting against disinformation or influence campaigns from foreign actors?

I think Facebook has taken reasonable steps based upon what happened in 2016. There’s two issues. One, we are always going to be vulnerable to some type of disinformation as long as we live in a free society. We don’t license the press; we don’t require people to have government IDs to get accounts on social media. Those are the kind of totalitarian steps you would have to take if you really want to prevent any kind of disinformation.

While I think the steps are reasonable, there will always be the possibility that people are going to push disinformation on the platforms. The other issue is, we’re not really sure what’s going to happen in 2018 to 2020, so while everybody’s been focused on the exact Russian activity in 2016, the goals of the Russians have changed, and I suspect the mixture of countries that are going to get involved has actually broadened. If you’re talking about a bunch of different adversaries all with different goals, we might see very different techniques to manipulate the election.

The Power Of Internet Platforms

Does it frighten you that we have to basically entrust a private company to do this, to basically kind of police their platform in this way and protect the electorate from disinformation?

Yeah, I think it’s frightening. I’m concerned about what we’re asking the companies to do. It’s reasonable to say that they should take basic steps around transparency. I think it’s reasonable to ask for the product to be designed to make misinformation hard. I think when we start to get to the point of where the companies are deciding what is and is not truth, who is a newspaper, who is a legitimate news outlet, that’s a really dangerous place to be.

These are very, very powerful corporations. They do not have any kind of traditional democratic accountability. And while I personally know a lot of people making these decisions, if we set the norms that these companies need to decide who does and does not have a voice online, eventually that is going to go to a very dark place.

So I do think we have to have a balance here of asking them to do things that are reasonable and appropriate based upon their power and reach, without asking them to control speech in a way that, in the long run, I think is going to be a big mistake.

You say a deep, dark place. What do you envision here, if you’re actually asking these platforms to take on more responsibility?

One of my concerns is right now, content moderation on all the big platforms has a significant human component, right? That naturally restricts the ability of them to do something truly awful from a moderation perspective. That’s not going to be true five years from now. Five years, these decisions will mostly be made by machines.

With the speed at which machine learning is progressing at text recognition, voice recognition, and understanding the context of statements, pretty soon somebody will be able to type in a phrase like “Stop all the content that supports this candidate” or “Stop all the content that is on this side of a position,” and then that will be enforced at computer speed on every piece of communication on these platforms. I think setting the norms now, while we’re in an imperfect spot where the platforms do not have perfect moderation, is important – it’s important to put the limits up now so that when we get to the place where machines are perfect, we have not set the standard that half-trillion to trillion-dollar companies are deciding what is and is not news, what are appropriate ideas to hold, and what are not.

I can’t think of a situation in which a democratic society has put that much power in the hands of so few people. There are downsides to social media allowing anybody to be a publisher; I think we’re dealing with those downsides. I think there are more significant downsides in turning the platforms into really tight chokepoints of what is acceptable discourse.

I want to make sure I really understand this point, because it’s really important, especially about what you were saying about the future with AI. Explain it to me as if I’m a kindergartner, frankly, honestly. Like what is this…

You’re really not speaking well to PBS viewers.

No, but explain to me what this scenario would be if you’re empowering these companies to make these choices and then they’re empowering a machine eventually to do that. What’s the risk there?

You know, the last couple of years we’ve seen some really positive societal change from movements driven by individuals, right? – the #MeToo movement, Black Lives Matter. The abuse of women in the workplace has been a problem forever, for certainly decades in the American corporate world. Police brutality against minorities has been a problem for decades. But we didn’t hear about that 20 or 30 years ago, and that’s because 30 years ago, a very small number of people decided what was newsworthy, and I don’t think we want to do that again. I don’t think we want to go back to a place where a very small number of people get to decide, “This is what we are going to discuss as a society, and these ideas are completely off-limits or considered not appropriate for people to discuss in public.”

In the mass media era, that was an active thing. People were deciding what to cover, what to put on the front page, what to assign reporters to do. In the social media era, it is a negative decision. It’s a decision to silence the voices of others. To put the genie back in the bottle through telling the platforms these are ideas that should not be allowed to be discussed, if you create those levers of power, eventually the hand of somebody who you don’t want is going to be on that lever, right? I think we [have to] just be really careful what levers we create right now.

We have some critical issues to deal with as a society, but in the long run, dealing with them by building some of the most powerful censorship regimes known to the history of mankind, I don’t think that’s a response that’s really going to have a long-term positive impact. The other concern I have is if you look historically, control of speech almost always in the long run benefits the powerful, right, and the Internet is one of the greatest democratizing tools we’ve ever had in the history of our species. It allows people to have a voice that they’d never had before.

When you look around the world, you’ll see that there are societies all around the world [in which] people have never had the ability to talk about what they want from their government, and they’re getting that for the first time. That’s one of the things I think that frustrates me about this conversation is we’re only thinking about it in the United States or in the Western context, but something like 90 percent of Facebook’s users live outside the U.S., and at least half of those live in either non-free countries or in emerging democracies that do not have traditional free expression rights.

We’ve got to be real careful of solving our problems in the U.S. by creating norms that, when applied in the developing world, are going to be horribly oppressive and will be used to sustain autocracies, to suppress democracy movements, to suppress women’s liberation movements. That’s what’s going to happen. And we’re already seeing that, right? Germany is probably the country that’s the furthest out there in the Western democratic world of regulating online speech, and their bill, the NetzDG bill, has been used as a model by totalitarian countries to demonstrate, oh, it’s OK to control hate speech or any kind of speech that is disruptive. Their definition of hate speech is obviously very different.

Facebook And Authoritarian Regimes

Right. But aren’t we seeing at the same time totalitarian or authoritarian governments using Facebook, for instance, right now for their own aims, basically?

I’m not here to defend Facebook, and I’m not comfortable with you putting me in that – like I’m – that’s not – One, totalitarian regimes already have control of the media. They control the press; they control the newspapers. They often do not control the voice of the people online. Yes, you can see totalitarian governments putting out their message via social media, but at least in social media they have the opportunity to be swamped by the voice of the people. And that is something we give up if we decide that governments get to decide what the Overton window is around political speech anywhere. Again, I’m not here to defend Facebook…

Well, that’s OK.

There’s people tomorrow who are getting paid to do that, right?

I understand that. But I’m wondering whether, when you were at Facebook, was it on your radar that there were regimes that were gaming the system and harming potentially innocent people and that were essentially weaponizing the platform in certain ways to harm Facebook users?

Yes.

Was that part of your purview at the time?

Yes. And we had a dedicated team whose entire job was to study the use of the platform by governments to cause harm, and that could be through direct attacks like malware and spear phishing to take over people’s accounts and steal their information. It could be via disinformation pushes. For example, we disabled all of the Internet Research Agency sites that were operating domestically in Russia. The Russian disinformation apparatus is aimed at their own people as often as it is aimed at foreigners. You do see governments doing that.

But I think in the long run, the ability of individual people to have a voice puts them on a much more even footing with powers that traditionally only governments have had, right? In the mass media era – and this is still what we’re facing now – when you see a disinformation attack domestically in a country, it is often pushed by the state-aligned media and amplified online, but their ability to control what the official narrative is, what the Overton window of conversation is – that ability is incredibly powerful.

Tell me if this is going off topic for you, but we have been interested in the Philippines, for instance, and in the Philippines, President [Rodrigo] Duterte has used Facebook and his media apparatus as a way to kind of quash dissent against some of his policies.

Yes.

So is this something that was on your radar, and was there something that Facebook could have done about this earlier in terms of how you combat an authoritarian government’s use of Facebook to quash dissent, for instance?

Well, why don’t you define “quash dissent”? There is a government-supported media, and they have access to social media, and then there are individuals who are aligned who are putting out propaganda on behalf of the government, and I think that’s always going to be true, yes.

You also then have in these countries the ability for the first time ever for people who disagree with the government to also have voice. A lot of the stuff you’re reading about in the West is only possible because [of] those activists, those NGOs. But the truth is, social media is never going to be a perfect democratizing force, right? You’re not going to magically wipe out these power structures that have existed for decades or centuries by giving lots of people cell phones. There will be an organized attempt by governments to then use the same technologies to cause oppression. I think in the long run you have to try to design your product to make it difficult for them to suppress those voices, and it’s really important then to have the policies in place so that you can’t officially assist them.

That’s actually something that I don’t think people discuss enough, is how much the Western tech companies had to stand up against data requests from these countries, which is why there’s this dance that happens between the Irish [General] Data [Protection] Regulation [and] U.S. regulations to push back and make sure that data is not given over about democracy activists and such. That’s a battle that happens every single day.

… In the Philippines, a well-known journalist, Maria Ressa, has come out and said that they’ve studied how social media has become a weapon for the Duterte regime and has blamed Facebook, for instance, for having not done anything to really combat that; that there is a campaign apparatus that he set up when he was running for office that then was turned into kind of a weapon to go after his critics.

What can Facebook do to stop something like that? Is it an obligation of Facebook to do something, stop something like that?

I think there is an obligation to make sure that people aren’t able to amplify these messages inappropriately, but you’re talking about governments that do have the support of at least a plurality of the population, and those people have a voice, too. I think it’s very dangerous to go down a road where you’re asking U.S. tech companies to overthrow governments that are seen as not being democratically representative or not being – the role here should not be to oppose the government; it should be to try to build a product that gives a voice to the voiceless and that does not allow the powerful to overamplify their message.

But that’s a tough part, right? It’s true. If you have millions of followers and you’re able to motivate those millions of followers to say things online, then that gives you a lot of power. Does it give you more power than when you controlled the only four newspapers and only three television stations? No, I don’t think so, right? And I think you can’t measure what happens now against some imaginary world where only the good guys have any voice. You have to measure it against the reality of what we were facing 20 years ago, and the truth is that overall, these tools have given individuals a much better chance at standing up against these autocrats.

That doesn’t mean it’s perfect, and I think there’s lots more that companies can do, and I’m not here to defend any specific activity. But we’ve also just got to be realistic about what somebody like Duterte was able to do 20 or 30 years ago when they controlled the entire media environment, which is not true there.

This will be the last thing on it, but I think it’s more a question of whether the technology is enabling what’s happening, right? It’s one thing if an authoritarian controls the media in his or her own country. It’s another thing if a foreign tech company and their technology is enabling, and it’s become a tool for oppression for –

Right. Television is a foreign technology for most of the countries in the world. Newspapers, invented by the Chinese, are a foreign technology for us. I mean, you can change the media, and you can change how this information gets out, but you’re always going to have a problem with autocrats having control, especially autocrats that have a large population of followers.

Even though for instance, … it’s an American company in places like the Philippines. It’s not a Filipino company.

Right.

It’s whether this American company is going to help enable a foreign leader who may be –

Well, define “enable.”

Well, if, for instance, there are fake accounts that are amplifying his message.

Then absolutely those should be taken down. This is where I think things break down a bit. I absolutely think that the companies have responsibility to prevent the amplification of messages, but we’re not – in most of these cases, you’re not talking about that. You’re talking about a large number of people pushing a message in support of an autocrat, which is a very difficult problem and a very dangerous one to say that a bunch of Americans in Menlo Park, [Calif., Facebook headquarters], should decide to change actively, right?

There’s a difference between trying to design a product to reduce the ability of people to amplify and taking an active role in changing a society. People use the term “digital colonialism” in a number of situations, and that kind of engineering from afar is absolute digital colonialism – the idea that Silicon Valley should try to change the Philippines actively. I think we’re going to end up in a really negative place, and the United States has learned that. [During the] 20th century we spent a lot of time overthrowing leaders who we thought were not helpful to us and then dealing with the consequences, and I don’t think that you should substitute tech companies for the CIA, for example.

Facebook And Russian Disinformation

Let’s go back in time a little bit. You started in 2015 at Facebook?

Yes.

OK. What were the main risks or threat factors basically at the time when you started? What were the main things that, when you got into your role, you were concerned about security-wise?

The biggest risk has always been an attack against a platform to steal information, right? That is, people’s personal information is valuable in a number of different ways. It’s valuable to economically motivated attackers; it can be valuable to governments; it can be valuable to activists who are trying to embarrass folks from the other side.

Direct attacks to try to steal data are always the biggest risk and what you have to focus the most on.

Since I don’t know this story, where would you begin the kind of process of discovery about Russia and about what had happened on the platform?

Russia’s disinformation activity stretches well before the U.S. election. They were involved in the Ukrainian elections; they’ve run a number of operations against groups in Europe, especially around trying to convince people that NATO is not a positive force in their country.

That seems to be one of the hallmarks of GRU activities. They’re very interested in splintering the NATO alliance. We have a dedicated team whose job it is to track persistent government actors on the platform. Part of them was dedicated to Russian actors, including the GRU actor that people are now calling Fancy Bear or APT28. For that group we had a good relationship with the government, and we had a kind of back-and-forth. If there was a situation in which they were targeting people in the West, we could inform those governments, inform those individuals, try to make sure that they’re safe. Their traditional technique is to drop malware or spear-phish individuals, steal their data, and then use that information to create scandals that would be harmful to them in the long run. They’ve also done things like invent scandals from whole cloth, although our group was less involved in tracking that kind of stuff.

In the spring of 2016 we first saw some signs that that group that we had been tracking was interested in the U.S. elections. For some of these traditional groups, there are well-defined processes to communicate that information to the government, and there’s a variety of private groups that track them as well. So we communicated what we saw, and we didn’t see any of the offensive activity on Facebook but a little bit of the reconnaissance activity. We reported that and then didn’t hear from them for several months.

If you could be a little more specific about what sorts of activity were you seeing them do, and then who are “they”? It was the second reference.

The GRU’s normal path here is first they want to learn about their potential targets, so they look up people that work for the organizations they care about; they try to understand their entire social media profile, what are their email addresses, who are their family members, and then they might try to deliver some kind of payload to take over the computer, take over their phone, or to take over their accounts via tricking them out of their password.

Those initial steps, the reconnaissance steps, often touch upon multiple platforms, including Facebook, so that’s where sometimes we’ll get an indication that they are interested in a potential target if we can tie an account that is being used for reconnaissance back to them. That’s the kind of stuff we saw in the spring of 2016 around the election.

And what were they doing specifically? Who were they targeting, and where was the activity focused?

I’d rather not get into it. This is data that we turned over to the special counsel and that you’ll see in indictments. It’s probably best for me not to talk about the specific activity.

And who was it? … You mentioned that you were turning over information at that point in time to law enforcement.

Right.

Who exactly were you turning the information over to?

We had a relationship with the cyber division of the FBI. They’re the ones who, in the U.S. government, coordinate between the intelligence community and the private companies on tracking these kinds of actors.

And what was the response you got back from the FBI at the time?

In these groups we have a pretty good relationship where we’ll get information from the FBI and we’ll send information up. One of the issues that we figured out later is that nobody in our companies or in the government was tracking the disinformation groups.

With these traditional hacking groups that do offensive activities, you have this interplay and this partnership, but we didn’t have that on the fake news groups or the groups like the Internet Research Agency.

When did the Internet Research Agency come on your radar screen?

There was a great report about them in The New York Times in 2015 right before I started. We did do work to find that activity that had been referenced and some related activity to take it down in 2015. But for most of 2016 they weren’t on our radar.

After the election, we kicked off a big look into the fake news phenomena. At the time we didn’t know: Is this Russian activity? Is this some other country? What’s the mix of people who are doing this for money versus doing it for geopolitical reasons? What we found in that study was the vast, vast majority of the activity was financially motivated. We were able to trace most fake news back to groups who are making money off of ads. One of the ways you can tell the difference between the financially motivated and the government-motivated is the financially motivated fake news groups take you off of Facebook, off of Twitter, because they need to get you to their site so they can make money off of ad arbitrage, whereas the government groups share memes and images and things that are easily copy-and-pasted because they want their message to spread, but they have no need to actually make any money.

What we found is the vast majority of the activity is financially motivated, but there was a component that we could not figure out who they were or what they were doing, so we took what we learned about those groups – and this is right after the election, fall, winter 2016, early 2017 – and then started to build protections around the French and German elections to find those kinds of accounts and to kill them early.
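
The distinction Stamos draws here – financially motivated pages that push readers off-platform to monetize ad arbitrage versus state-backed pages that post native memes meant to be re-shared – can be turned into a minimal sketch. This is not Facebook’s actual detection logic; the field names and the 0.5 thresholds are hypothetical illustrations.

```python
# Illustrative only: a toy heuristic for the pattern described in the interview.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    has_external_link: bool  # does the post send the reader off-platform?
    is_native_media: bool    # meme/image/video hosted on the platform itself

def likely_motivation(posts: List[Post]) -> str:
    """Guess a page's motivation from the mix of content it posts."""
    if not posts:
        return "unknown"
    link_share = sum(p.has_external_link for p in posts) / len(posts)
    meme_share = sum(p.is_native_media for p in posts) / len(posts)
    if link_share > 0.5:
        return "possibly financially motivated (ad-arbitrage pattern)"
    if meme_share > 0.5:
        return "possibly political or state-backed (native-meme pattern)"
    return "unclear"

# Example: a page whose posts are almost all outbound links fits the
# financially motivated pattern.
print(likely_motivation([Post(True, False)] * 9 + [Post(False, True)]))
```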

When you’re saying “those groups,” just help clarify for me, which groups are you talking about?

So you’re talking about a mix of financially motivated groups, which include the canonical Macedonian teenagers but generally what you might consider light organized crime, groups of people who are creating the spammy content, doing ad fraud, who are working together to push that content on multiple platforms, and then a variety of clusters of activity that we didn’t think was financially motivated but we couldn’t trace back to a specific sponsor at the time.

That’s one of the challenges you face: you can see that this activity is bad, but if they’ve done a good job of covering their tracks, you might not be able to figure out who actually sponsored them. That’s an issue that’s going to get worse as this arms race happens: as we find these actors, they’re going to try to figure out how we figured out who they were and then cover their tracks better. At the time we had these clusters of: here is fake news activity where we don’t think they’re making money, but we can’t tie them to any specific country at this point.

Why weren’t disinformation campaigns really on your radar at that time before the election?

I think there’s an interesting organizational problem here in that the information security community studies people who are hacking other people’s computers. Either it’s hacking us directly or using us to attack other individuals. Those of us that work in security come from this world, right? Of our threat intelligence team, around half of them come straight from U.S. government intelligence agencies, so these are the issues we care about, these are the groups we’ve traditionally tracked, and this is the information that is available in the wider ecosystem either from governments or from private intelligence contractors.

There’s just no ecosystem around disinformation. I think part of it is, it’s a much smaller problem for only a small set of players, right? There are lots of hacking groups that can be seen in lots of different scenarios, but spreading computerized propaganda is something that only affects social media companies, other companies that do user-generated content. I think part of it was there had just never been a large enough ecosystem of people studying the problem that they would work their way into the teams.

And traditionally you wouldn’t consider this actually a security problem. My team got involved because the GRU actors are traditional cyber actors, and we were already tracking their activity.

Facebook And Ukraine

You mentioned Ukraine, for instance, right?

Yeah.

The Russians launched a disinformation campaign on social media in Ukraine.

Yeah.

So did that resonate with you at the time, saying, wow, this could actually be a factor more generally on Facebook? And what was your response to that? What was your thinking at the time?

We knew that there was a possibility of attack since the election. We were really focused on the government groups like GRU. We knew that they were active during the Ukraine crisis. We had taken action against a number of their accounts and shut down their activity. But at the time, we had not picked up on the completely independent disinformation actors. And to be clear, while there’s a lot of talk of the IRA [Internet Research Agency], I’m not sure their campaign was as effective. Their activity is much more broad and less focused; it happened before the election and continued after the election.

Ninety percent of their content had nothing to do with any candidate. It was really about driving division in American society. The GRU activity is much more focused on what exactly is happening in a country at that time, and that’s what we were more focused on and what we had traditionally tracked.

And was anyone from government, from either the intelligence community or your liaisons at the FBI, was anyone coming to you during the election and saying you guys need to look out for disinformation campaigns or influence campaigns?

No.

Facebook’s Response To The 2016 Election

If you’re comfortable telling me, what was your response to the election internally? How would you gauge whether there was a sense that other factors were at play in that election?

Overall, I think people were shocked. Silicon Valley, like New York, is a bit of a bubble, and I think a lot of people were surprised; the assumption was Hillary’s going to win. That’s what most of the predictions said. While we knew of this activity, I think the assumption was Hillary will win, and it can be taken care of later. As far as the fake news component, there were a number of battles around fake news. That generally is not my job. I don’t work on content policy. But the investigation after the election was my responsibility because our team investigates those groups that are acting in a coordinated manner to cause harm, whether that’s a terrorist group or whether it’s a government group. We took the lead of a group drawn from multiple parts of the company to work on this.

How do you even begin something like that? When you begin to look into the fake news issue and a coordinated effort to do that, … is that an order from Mark Zuckerberg or Sheryl Sandberg? What’s the impetus to do it?

After the election, we put together a report of everything we knew about Russian activity during the election, everything that we had seen as well as all of the external activity, and we delivered that to a number of executives. The outcome of that from Mark and other top executives was the order to go figure out what is the scope and scale of fake news as well as specifically what component of that might have a Russian part in its origin.

The way that works is you first have to look at the billions of pieces of content that were created during that time and narrow it down to politically divisive content, so there was the creation of what we call a classifier, which is basically a piece of software that looks at a piece of content and tries to say, does this look like a politically divisive topic? That doesn’t mean it’s fake or not; it just means, is this something that’s meant to rile people up politically? We created a constellation of all that content and then built a system that looked at who posted that content and whether there were any indications of them working together on the back end. The goal here was to find clusters of activity where perhaps people were pushing this content in an inauthentic manner.

Once we had clusters of “Here’s a group of people working in coordination,” we would dive into can we figure out what their identity is. The result of that is for the vast, vast majority, we were able to at least trace it to a location almost always outside of Russia, and we could find signs that they were making money in some way, usually by landing people on sites that were running lots of expensive ads or doing ad fraud. But then we had a couple of those clusters that we could not identify what their goal was and that it possibly could have been motivated by the government. But in any case, what we could learn from that is this is what it looks like when a group of 20, 30, 40 people are working together to push a specific political message in an inauthentic manner, and that’s what we used then to start to build protections. Even in cases where we didn’t know who was doing it, we could look at the activity and try to build systems to find and stop it.
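
The two-stage process described above – a classifier that flags politically divisive content, then a search for accounts that posted it and share back-end signals – might look, in very rough outline, like the sketch below. It is not Facebook’s implementation; the training examples and linkage signals are invented for illustration.

```python
# Illustrative only: a toy version of "classify divisive content, then link
# the accounts that posted it by shared back-end signals."
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Stage 1: a tiny "politically divisive topic" classifier (hypothetical data).
train_texts = [
    "deport every immigrant immediately",
    "abolish all borders right now",
    "weekend farmers market opens saturday",
    "our team shipped a new phone case",
]
train_labels = [1, 1, 0, 0]  # 1 = politically divisive, 0 = not
vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

def is_divisive(text: str) -> bool:
    """True if the classifier scores the text as politically divisive."""
    return bool(classifier.predict(vectorizer.transform([text]))[0])

# Stage 2: link accounts that posted flagged content and share any back-end
# signal (hypothetical examples: a payment instrument, a device fingerprint,
# an IP block). Each linked pair is a hint of possible coordination.
def coordination_edges(account_signals: dict) -> list:
    edges = []
    for (a, sig_a), (b, sig_b) in combinations(account_signals.items(), 2):
        if sig_a & sig_b:  # the two accounts share at least one signal
            edges.append((a, b))
    return edges

print(coordination_edges({
    "page_a": {"device_1", "ip_block_9"},
    "page_b": {"ip_block_9"},
    "page_c": {"payment_7"},
}))  # -> [('page_a', 'page_b')]
```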

Were you surprised by the scope of the fake news problem that you discovered?

Not really, because honestly, it’s not that big. There’s a lot of discussion about fake news taking over, but the truth is, you know, like the Internet Research Agency activity is well less than 0.1 percent of all of the news stories around the election, and the financially motivated fake news overall was still actually not a huge chunk. Things that came from the legitimate media, even the highly polarized media, had much larger reach and larger impact.

It was surprising how many groups were doing this financially and how well organized they were, right? We found these situations where you might have one college student who is able to hire 20, 30, 40 people part-time to make fake news for them. So that was surprising, when we dug into these specific situations, how professionalized it had become. There’s also a number of kind of crazy stories of entire towns where a handful of people would figure this out and then enlist all of their neighbors in franchising out the fake news model. That was something that was surprising.

We’ve seen this kind of criminal activity in all kinds of direct rip-off scams, but to do so to push fake news, and the fact that they can make enough money to support these large ecosystems of people, was actually pretty surprising.

Did it reveal the larger vulnerability of Facebook to you, or kind of a larger problem or systemic problem at the time?

I think what it demonstrated is there’s a hard trade-off between the authenticity of people and your ability to prevent this kind of disinformation. Traditionally when we’ve talked about enforcing authenticity rules, it’s been seen as a negative thing, right – all of the corner cases where you shut down somebody’s account because they have a funny name or the issues you create around people who are transitioning their gender – but the truth is, there’s a lot of upside to enforcing authenticity, and that’s one of the things I think the company had lost focus of in the couple of years before the fake news crisis.

So it’s amid the fake news crisis that you’re investigating that you’re finding these groups; you don’t know exactly where they’re coming from. Bring me through the next phases of what you’re learning about, what the Russians actually had done on the platform during the election.

So we already knew about the GRU activity, obviously. We had to learn from public sources about exactly what happened to the DNC [Democratic National Committee] and the email hacks, but it’s pretty clear to us that that was part of a GRU plot. The rest of the Internet Research Agency activity we found in the summer. We had received a number of questions around advertising, and up to that point, all of the advertising we had found we could trace back to financially motivated groups.

They would run relatively cheap Facebook ads, get a click, [and] get you to their site, where they would run much more expensive ads. Even though they were paying Facebook, they were making profit on every single click. But of the four clusters we thought might be geopolitical, none of them had run advertising. In the summer – because, you know, we kept on getting questions from members of Congress around advertising, and it seemed very, very specific, and we asked them – I actually personally asked them, “If you have information, it would be great if you guys can share it with us, or if you could go back to the agencies and ask them to declassify so it can be shared.”

That never happened. We never got any help from the government on this. So what we did is we then decided we’re going to look at all advertising and see if we can find any strange patterns in the same way we looked at the news articles. So we built a system to look at all advertising that was at all politically related in the run-up to the 2016 election and then looked for the same kind of clustering: are these ads being run by accounts that have some kind of link, and do any of the accounts in that cluster have any information that might link them to Russian activity? By kind of a painstaking process of going through thousands and thousands of false positives, we eventually found this large cluster that had all been working together and that we were able to tie to both advertising and non-paid activity. That’s what then, through a number of other means, we were able to link to the Internet Research Agency of St. Petersburg.
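
That clustering-and-review loop over ad accounts can be sketched the same way: treat ad-buying accounts as nodes, connect any two accounts that share a linkage signal, take the connected components as candidate clusters, and surface for human review any cluster in which some account carries a possible Russia-linked indicator. Again, this is an illustration rather than Facebook’s method; the signals and the indicator are invented placeholders.

```python
# Illustrative only: group ad accounts by shared (hypothetical) signals and
# flag clusters containing any account with a possible Russia-linked indicator.
import networkx as nx

# account -> (set of back-end signals, possible Russia-linked indicator?)
ad_accounts = {
    "page_a": ({"payment_1", "ip_block_9"}, False),
    "page_b": ({"ip_block_9"}, True),   # hypothetical locale/currency hint
    "page_c": ({"payment_7"}, False),
}

graph = nx.Graph()
graph.add_nodes_from(ad_accounts)
for a in ad_accounts:
    for b in ad_accounts:
        if a < b and ad_accounts[a][0] & ad_accounts[b][0]:
            graph.add_edge(a, b)  # the two accounts share a signal

# Every connected component is a candidate cluster; most will be false
# positives, so each flagged cluster still needs painstaking human review.
for cluster in nx.connected_components(graph):
    if any(ad_accounts[acct][1] for acct in cluster):
        print("candidate cluster for review:", sorted(cluster))
```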

So the tip comes from someone in Congress? Tell me about that, and if you could be as specific as possible, because that’s something that I never heard.

A number of members of Congress who get classified briefings continued to ask us [about] advertising, and our message was it could exist, but we had not found it yet, and we’d love to have any information you have. Then there [were] at least two news stories that had leaks that said there [were] classified reports that said that Russians had run ads on Facebook. But our requests to get more data, none of them resulted in any tips, so we had to go back, and our assumption was the government had found something that they weren’t telling us about, and therefore we should try to re-create whatever they did on our own.

What was your reaction to it at the time? When you hear basically this hint from members of Congress that the Russians had advertised on the platform, you think what?

Well, it’s frustrating to be told that there’s something and not get any help to find it, but, you know, I understand that. One of our issues overall is that there’s this massive over-classification of data, and it makes it very difficult for those people who are standing between the intelligence community and us to try to be helpful.

It was our responsibility to find it, so in the long run it was reasonable for us to go look for it. It would have been nice to get those tips a little bit earlier, and that probably could have been helpful.

When you start discovering the ads, I can only imagine what it’s like – I’m an outsider – but you’re peeling back the onion of an operation that was taking place on Facebook. What’s it like in terms of discovering these things, or is it kind of “Holy s—, we’ve been gamed”? What’s going on internally?

Yeah. I mean, everybody was pretty upset that we hadn’t caught it during the election. It was a very intense time, right? You have people in a war room working 70-, 80-hour weeks, heads down, looking and finding more and more activity, trying to put our arms around this entire cluster so we can kill it all at the same time. Summer of 2017 was pretty intense and disappointing … A number of us were upset that that was not something we were able to find in the run-up to 2016. So yeah, it was upsetting. I – we had stopped our investigation on the U.S. election, because we had really focused on France and Germany, right? Immediately after the U.S. election, we had major European elections all with far-right parties that we suspected would be supported by the Russians running, so our big focus was, how can we take what we learned and apply it to those elections?

But then, to answer the advertising question, we had to turn back to 2016 and restart the investigation in a new manner.

How quickly did you staff up, or was there a scramble inside – you’ve described this war room. What was it like – I’d imagine there was a scramble to figure things out, so what – ?

We enlisted huge parts of the company, right, so teams that work on ad fraud, teams that work on counterterrorism. We kind of dragooned everybody into one big unified team to do these investigations.

But the problem is at some point you lose the advantage of adding more bodies, right? One of the things that really had to happen quickly is that we had to build tools to automate this work, because when you’re looking at billions of dollars of ads, hundreds of millions of pieces of content, no matter how many people you throw at it, you will eventually run out of capability. The other thing that was happening in parallel during this time was people trying to relieve the pressure on the team by building tools that could find these things in a much more automated and automatic fashion.

How Political Manipulation Works

And was it becoming clear to you that there had been something wrong with the standards in place for advertising on Facebook; that there were not strict enough policies in place? Was that part of something that you were thinking about at the time?

Yeah. I mean, I think what it made clear is that you have to have a higher level of standard for anything that’s politically motivated, not just candidate ads. Traditionally in the United States, regulations are applied to ads that name candidates: Vote for Bob. Vote against Sally. Vote for Proposition B.

The vast, vast majority of these ads had nothing to do with a candidate and therefore fell outside election law. I think what it demonstrated was that election law was not the guidepost that we thought it could be. We’d have to go well above and beyond on having rules that are much stricter than what were legally required.

… As you’re discovering this, what are you seeing? What are these ads? Who are they targeting? How are they paid for? What are you thinking when you’re seeing this stuff?

The goal of these ads is not actually to push the message. The goal of the ads is to build an audience to which the message can then eventually be delivered. What the Internet Research Agency wants to do is they want to create the appearance of legitimate social movements that they put themselves at the top of, and they like to do so by drafting behind existing faultlines in U.S. society.

So they would create, for example, a pro-immigration group and an anti-immigration group.

Both of those groups would be almost caricatures of what those two sides think of each other: we should have no borders; we should allow any immigration from anywhere; and then, on the other side, all immigrants are criminals. They would actually in some cases reference each other and use each other as an example of “Look how horrible these people are on the other side.” Their goal of running ads [was] to find populations of people who are open to those kinds of messages, to get them into those groups, and then to deliver content on a regular basis to drive them apart.

… We’re talking about something like 3,000 ads, but in the end, we were able to find about 80,000 pieces of organic or unpaid content. The ads are the tip of the spear, but the real goal is to deliver those 80,000 messages, pushing people to the political extremes.

As you’re seeing this – I’m just curious – experientially, right?

Yeah.

You’re discovering what we all now know much more about, but you’re there kind of in the trenches discovering it.

Right.

And your team. Bring me into that as best as you can as you’re starting to see what this campaign looked like.

One of the surprising things was how they were trying to take over communities on both sides, right? I was expecting that you would see them focus just on right-wing populist groups, but some of the largest targeted groups for advertising were actually African Americans living in large cities, and it’s because they were pushing both pro-police groups as well as fake Black Lives Matter groups, and they were trying to build the audience for the second one so they could deliver really divisive messaging. That was actually kind of shocking to me, because I did not – you know, when people talk about fake news, they think it’s just about pushing one candidate or one position. But really what the Russians are trying to do is to find these faultlines and amplify them and to make Americans not trust each other and to reduce the ability of people to interact positively online.

Obviously they’d been pretty successful not just through the direct campaign, but we now live in a world where if anybody says something you don’t agree with, you automatically call them a Russian bot, right? And I think the way we’ve reacted to the situation as a society has actually amplified their capability. Either it is a Russian bot and they’re fooling us into thinking – or most likely it’s not, but now that we see Russian bots behind every curtain, it becomes very, very difficult to actually engage with somebody and understand where they’re coming from.

The thought that went into that was actually really shocking to me. It’s really quite both ingenious and evil to attack a democratic society in that manner.

What about your reflections at the time about what this tool that Facebook had created [can do], right – the ability to micro-target, the ability to segment up our society to some degree, and that in some ways did the Russians use Facebook in a way that Facebook was designed to be used?

One of the saddest parts was it became clear that they were using the exact tools that people were using for positive movements in the Arab Spring and the other democracy movements, building audiences, creating private groups, pushing messages to be amplified. Those are the kind of activities we saw in 2010, 2011 that were positive, and they took those exact kind of ideas and they turned it against us. So yeah, it was sad to see that a product that you know brings a lot of people joy and that can be used in a lot of positive ways to be subverted that way.

Changing The Culture Of Silicon Valley

One of the things you’ve written and talked about in speeches is kind of a culture here in Silicon Valley and a culture of, you know, there’s a very strong dogma at Facebook of making the world more open and connected and seeing the social good of this tool that they’ve created. Were you sort of an outlier to some degree inside the company of being more of a realist as opposed to an optimist? And was this proving something to you about what this tool really is?

Yeah. I mean, my entire job was to be paranoid and to kind of wallow in human misery, right? You think about the teams that I supervised. There’s a team specifically focused on child sexual abuse, a team focused on extremism and counterterrorism, a team focused on people who are being ripped off from their money. So it sometimes makes it hard to calibrate, because if you spend all day dealing with the worst of humanity, it’s easy to believe that everything is bad and that you can’t make a product that has any positive impact. But I think we do have the problem in Silicon Valley on the other side that when we build products here, we think about how we are going to use them, how our family members are going to use them, how people are going to use these products in a positive way.

And most people are good, right? Despite what I do all day, I still believe most people are good. But there’s enough bad people, and especially enough organized, well-paid professionals who want to cause harm, that whenever you build anything in Silicon Valley, you have to think about how those people are going to act. One of the problems we have in the valley, something that’s both a strength and a weakness in the valley, is our lack of institutional memory. Young kids coming out of Stanford right now don’t think about all the companies that have failed before. That’s what gives them the courage to go try something that sounds crazy. But the flipside is they don’t have any memory of all the ways technology has failed society.

And I think that’s something we’ve got to fix. It’s something that we’ve got to fix inside of companies, but it’s something we’ve got to fix in the bigger picture; that we’ve got to talk more openly about these failures so that if we’re going to make mistakes, we should make totally new ones. As a valley, we should not make these same mistakes over and over again.

But did you feel listened to as much as you should have been in terms of, if you’re thinking about the harms to users, and others are focused on the good that the platform is doing and growth, did you feel like there was appropriate attention being given to the harms that you thought could result?

I think there was a structural problem here in that the people who were dealing with the downsides were all working together off kind of in the corner, right, so you had the safety and security teams, tight-knit teams that deal with all the bad outcomes, and we didn’t really have a relationship with the people who are actually designing the product.

You did not have a relationship?

Not like we should have, right? It became clear – one of the things that became very clear after the election was that the problems that we knew about and were dealing with before were not making it back into how these products are designed and implemented.

Something that Facebook did that was overdue but was the smart move is a lot of this responsibility is now the responsibility of the product team, so the same people who are told, “Build a product that a lot of people use and a lot of people like to use,” are also being told, “You’re responsible for the downsides.” That’s a change that happened too late, but I’m glad it happened, and it’s something I think we need to see more companies do.

There’s actually an interesting historical parallel here. In the early 2000s, Bill Gates wrote this famous memo at Microsoft around “Trustworthy Computing,” about how Microsoft had all these security problems, and security had to become their prime focus. One of the big changes they made was they made every individual engineer responsible for the security of their own code. That’s where we’ve got to get now on these safety and trust issues, is every product manager, every engineer has to be thinking adversarially, and they need to be educated in [the idea that] these are all the bad things that have happened before, and you’ve got to break them of the model of thinking about how they want to use a product and to think about all the negative ways it possibly could be used, which is very different. It’s just not how we train people in college. It’s not how you get trained at companies. And, you know, the companies are all really happy-go-lucky places – the free food, the social events, the open internal culture – and it’s sometimes a difficult place to be a pessimist, right? Somehow we’re going to have to build that pessimism into more people without losing the positive parts of the culture that make people think big.

As you’re realizing what had happened during the Russia interference and the extent of it and the details of it, what was it like bringing that news to others in the company and up to Mark and Sheryl, for instance?

I think the interesting part of that is actually before, right? In November and December of 2016, this was one of – you know, we had a big responsibility in the security team to educate the right people about what had happened without being kind of overly dramatic. It’s kind of hard as a security person to balance that, right? Everything seems like an emergency to you, but in this case it really was, right? This really was a situation in which we saw the tip of this iceberg, and we knew there was some kind of iceberg beneath it. I think the fact that it took us until then is one of the things that I learned too late: that responsibility should have been something that was much closer to the product decision-making process.

You shouldn’t have to actually go to the top executives on these issues. This should be a day-to-day experience, and the fact that we had to kind of ring the bell at the highest level to get attention was the kind of organizational failing that had to be corrected later.

It sounds like you’re referring to a specific meeting or some specific event that happened in just November, December, and if you could just bring me into something there of what you’re referring to here…

We had a series of meetings in early December, especially when we were talking about what we had found about Russian interference, and it came as a big surprise to the leadership on the product team, and that just indicated to me that we had not done our job to enroll the right people, because the people that decide what the product is, they’re the ones who actually have the power. We can go find a couple of bad guys, shut them down; we can try to build machine-learning tools to find activity, but one of the things I’ve learned at Facebook is that little changes in how the product works overall can have massive downstream effects that you can’t fix through little actions like that, right?

The reality is that what matters is how the product works, not the operational component of people deciding which accounts to shut down. You know, one of the mistakes we made was not involving those people much earlier to understand that problem. The other problem we had is that the fake news problem that was being worked on through 2016 was not connected to the Russian problem, right? Those were being dealt with by two totally different groups as two totally different kinds of issues. Fake news was mostly being dealt with as kind of a quality issue, not as an adversarial issue, and Russia was being dealt with as a cybersecurity issue, not as a cybersecurity-plus-content-quality issue. Those two worlds just had not worked together until this point.

And when you say "the product," what product are you referring to? When you say the product was designed without anyone actually thinking these things through, what product is it?

I mean, the components of Facebook that are most relevant to this issue are the News Feed and the algorithm that powers what people see and the advertising platform.

The Problem With News Feed And Ads

OK. So what is it about the News Feed that you were discovering? Something had not been thought through properly, so what was it that hadn’t been thought through?

Something that all the fake news purveyors, including the Internet Research Agency, figured out is that it’s very powerful to get people to re-share content. If you look at the concentric circles, here you have a couple of million people who wanted to see the IRA content. They didn’t know it was coming from Russia, but they intentionally signed up to be in this anti-immigration group or this anti-police brutality group.

But then those people were so activated on those topics that they re-shared and re-shared the content constantly. From that small group of a couple million people, you end up with 135 million people seeing it. This is kind of the crazy Uncle Sal problem, right? Everybody’s got that uncle on Facebook who shared some kind of really outlandish content, and the News Feed algorithm, because you were friends with that person and you interact with that person, was making that something that would come up a lot in people’s feeds, and that was something that was being exploited by both the financially motivated fake news and by the government actors.

But it was designed that way for a reason, right? It was designed in order to keep people engaged with News Feed. So – were you kind of pointing out to these product people that there’s an inherent flaw in their design?

Yeah. I mean, I can’t take credit for – I think a lot of people came to the same conclusion of looking at how did this – one of the things that happened is there’s a lot of studies of how did this one piece of fake news make it into people’s news feeds, and the fact that there was this amplification factor, and then that was exploiting how the algorithm worked, was one of the issues that a number of people came up. I’m not going to claim that I was some kind of magic genie here that I figured it out before anybody else.

But when you go to the product design team or you go to the product team and are like, "Look, this is what's happening here; this was a downside of something you designed," what's the response you get initially at that point in time?

Well, after the election, the product teams were pretty shocked, and there was a broad recognition that something wrong had happened. The real question is, where did it come from? Was it just spam, or was there some more malicious actor behind the curtain? So there wasn't really pushback from product on "This is something we should care about" at that point.

And what about ads? You mentioned ads as well as News Feed. What was it that you were recognizing about the vulnerabilities or problems with the advertising platform?

I think there are two issues with how the ad platform was used in 2016. The first is the micro-targeting. This is something that used to be celebrated. In 2012 the Obama campaign invented modern targeting in online advertising – coming up with small segments of people and giving them a message that's specific to them.

The Trump campaign took that to the next level, while it seems the Hillary campaign perhaps regressed a bit and was not using the same kind of techniques that were being used in 2012. The micro-targeting I think is a significant issue, and one of the things I think needs to happen here is we need to have rules around how finely you can micro-target people. Even without Russians involved, I don't think it's an appropriate thing for candidates to cut up the electorate into tiny, tiny little pieces.

The second was the authenticity issue. You know, one of the really popular components of Facebook advertising is that anybody can do it, right? If you're a small-business owner, you don't have to have a relationship with an advertising firm; you don't have to have a Saatchi & Saatchi account to run your own ads. That's why there are millions and millions of people who run ads, but that also means that the vast majority of advertisers on Facebook have no real relationship with the company. They have a Facebook account; they put in a credit card; they never talk to a human being. That's how these ads were run – as kind of self-run ads. The amount of money never hit a point where any human being would talk to them.

And I think the other lesson that comes out of all of this is you can't provide that kind of advertising capability, at least around political topics, in an anonymous fashion. You can't just let anybody with a credit card run political ads. That will be misused in one way or another – not necessarily illegally. In the vast majority of jurisdictions, it is totally legal for somebody to run ads in that country on a political topic, but practically, when you lower the barrier to entry and allow anybody to do it, versus the barriers that might exist for a television ad or a newspaper ad, it makes it way too easy for bad guys to manipulate it.

Assessing Facebook’s Response

In some ways it seems like there were warning signs along the way in the trajectory of the company, whether it is about engagement-driven algorithms, or the potential for misinformation or disinformation to spread, or the targeting of specific populations. Was the writing on the wall for a lot of these issues? Were warnings kind of unheeded? I'm just kind of curious how it is that you get to the place where it takes something like Russian interference in the election for all of these to all of a sudden dawn on this company. It's a larger question, but I'm kind of curious what your perspective is on that.

It's not like there's nobody working on these issues, and I think that's one of the misconceptions. The company has been dealing with the negative side effects of its product for years, right? The first area was around child safety, and then around fraud and impersonation. Before the election, the big focus was actually on the use of the product by ISIS, right? When you have 2 billion people on a communication platform, there is an infinite number of potentially bad things that can happen. The tough part is trying to decide where you're going to put your focus.

I think that in all of these safety and security issues, we're always trying to play catch-up to the last issue that we dealt with. One of my concerns around 2020 is that we're so focused on exactly what the Russians did in 2016 that I'm not sure we're going to be able to move in a predictive fashion. There were teams working on these issues, but they were all kind of disconnected and seen as spot problems, and none of them rose to the level of rethinking, overall, how authenticity works or how advertising works. I think the Russia issue is just large enough and pervasive enough to trigger that reckoning.

From the outside it appeared as though there was a slow roll, a recalcitrance and a lack of leveling with the public about what the problem had really been and how big the scope was. … How do you respond to that, the idea that you were really slow even once this was detected to come out with what actually happened?

I think there's a difference between the work that's being done to stop this activity and the comm strategy of how you talk about it. The truth is, Facebook has been more transparent than any company or any part of the government on this issue. None of this research was legally required to happen. It all happened because we cared about it, because there were groups of people who really care deeply about these issues. We dug into them, and then we voluntarily released data about them, which, if you look at what's happened now, maybe that was a mistake, right? There are a number of companies that are lying very low who have never said anything about any of this, who have gotten much less pushback.

I'm a little afraid that if you're starting a Silicon Valley company right now, and you're looking at what's happened to Facebook, the lesson you're going to take is that you should never talk about this. But, you know, when we found activity during 2016 – not enough, but some – we proactively shared it with the government; we spun up a huge team and studied these issues in the winter of 2016-2017. We published one of the first and still only papers on this issue in April of 2017, where we talked about the classes of disinformation and what we were going to try to do around different classes of disinformation.

In the summer of 2017 we published more. Yes, I think one of the problems has been this iterative process of not having everything all at once, and I don’t think that has been a good strategy. But the truth is, you’ve got these companies now acting in this quasi-governmental way, and the standard used to be, you tell law enforcement, and then it is the responsibility of the government to make these disclosures or to decide what to do. What you’ve seen now is Facebook and everybody else realize you can’t wait for that anymore; you’re going to have to have a direct relationship with the populace, and you’re going to have to talk about these things in a more open manner.

And that, unfortunately – I mean that was a process that played out, and everybody got to watch, right? I understand if you're on the outside it seems – I can understand why people feel like the company hasn't been forthright, because it is very difficult to have these back-and-forths about here's one number; here's the next number; here's the next number. But part of the problem is there's no real standard for this, right? Nobody has ever had to do a blog post of "Here's our understanding of how an election was tampered with through a disinformation attack," so there's no good standard of what numbers you use, how you measure certain things, and what level of certainty you have.

There are also some really interesting and difficult legal questions here about what data can be shared with whom. We haven't talked about it yet, but I'm sure the term "Cambridge Analytica" is going to pop up, right? One of the issues you see here is there's value in openness, but this content is protected by both U.S. and, in this case, Irish law, and in the future will be protected by GDPR [General Data Protection Regulation], and actually, there are some interesting unanswered legal questions about what you can just dump out to the public, what you can give to Congress, and what you can give to law enforcement under lawful process.

The other issue is the focus has always been on the relationship with law enforcement, and it turns out that that relationship is perhaps less important than the relationship with the broader public.

In April you’d come out with a report, and it didn’t mention the Russians. Why was that?

There's a hard question about in what situations you provide attribution of an actor and whose responsibility it is to do that. There was an internal discussion about whether we should name Russia directly, and where it ended up was that we referenced the DNI report, saying that our results were compatible with what the DNI had said at that point. The Director of National Intelligence had specifically pointed at Russia and the GRU.

I’m a little torn here, because it was a big fight for us to do anything and to kind of put ourselves in the line of fire, and it’s a difficult thing for a private company to point at an incredibly powerful nation-state, standing kind of by themselves. From our perspective on the security team, the goal was to inform the conversation and to start to educate the rest of the tech community of “This is what a disinformation attack looks like,” which if you look at our paper, we got [it] totally correct. Everything we’ve predicted in there has been supported then by the special counsel indictments and other information that’s come out.

So I'm glad we put it out. It would have been great if we had put more detail in, but realistically, in a situation where we were breaking ourselves into jail and pointing the finger at ourselves – "This is something that happened on our platform" – I think that was what you're going to get out of a big company where you have lots of different equities being balanced.

Would you have liked to have been more forthright in that report?

I'm not going to claim any kind of special privileges here. We went and we briefed Congress, and we were extremely honest with them about our level of attribution and the fact that we believed that public attribution was more appropriately done by the government. So we briefed Congress; we briefed the FBI; eventually, when it was appropriate, we briefed the special counsel. That's who we were focused on giving as much information as possible.

Honestly, do you think it would have changed anything? I mean, we said this was activity compatible with the attribution of the DNI. Even if we had had the word "Russia" in there – it was as bright of an arrow as possible without using the word, and there's all this media focus on that one word, and it's a little bit frustrating, because again, nobody else, including the government, has put out a report as detailed as that, right? That was something that a private company kind of had to stand up and act on its own on, in a situation where the government was not, and that was a pretty big step, to kind of put ourselves out there and say, "This is stuff that we've seen; it is compatible with it; these are the things we're doing."

Sitting where we are today, can you confidently say that we the public know what happened exactly on Facebook during the 2016 election when it comes to Russian interference?

Well, that’s a tough question, because the company’s been forthright about what it knows. There are two questions: Has everything been found? And the answer is probably no, right? There will always be the possibility that there are very high-end adversaries who have perfect operational security who have not been caught.

I think the second issue is that there's probably a lot of activity that is not direct activity on Facebook that we don't know about. We don't know what groups the Russians were supporting; we don't know if there are any PACs or NGOs or nonprofits running ads with money that came from Russia. So I think we probably have our arms around the content that was directly run by these Russian actors. The question is all the stuff that's one step removed – and I think that's still a very open question. … A little bit has come out from the special counsel, but I'm not sure anybody's actually answering that question.

Trade-Offs And Regulation

I just would like to know some of the trade-offs that you see in terms of asking Facebook to take on more responsibility [in 2018]? What are the main ones?

One of the big trade-offs is around content moderation. When you ask one of these platforms to decide what is acceptable speech, there will always be mistakes made. You can't realistically expect them to enforce their policies perfectly. So the more we turn up content moderation of what falls into the lines of hate speech, of what is unacceptable political discourse, the more you're going to end up with fringe speakers getting caught in that net. I think another major trade-off is how much power you want to give these companies to decide what is legitimate political speech.

It is very attractive to put all the responsibility on the platforms for a short-term problem, but we’ve got to be careful not to create a longer-term problem where we are asking hugely powerful corporations without democratic accountability to decide what kind of political conversation is allowed online. There’s another difficult trade-off between privacy and safety. Most of the work that was done to find the Russian activity required a great deal of data about who these accounts are and what they did.

If the companies are required under GDPR or any future U.S. legislation to know much less about their customers, then their ability to find these bad guys is going to be greatly reduced. Now, that might be a reasonable trade-off, but it’s one that people aren’t discussing right now, that there really is a trade-off inherent in some of the privacy requirements especially around data collected from mobile devices.

… It essentially makes it harder for you to find people if there are higher privacy requirements?

Well, especially – it makes it especially hard to look retroactively. A bunch of the provisions in GDPR are being interpreted to say you have to throw away data after 10 days, 20 days, 30 days. That means if you don't start an investigation right after something happens, like an election, then that data might be gone when you try to look back. So for the work we did in the summer of 2017 to find ads that were run in the summer of 2016 – if that ad data had been thrown away, there's no way we could have found the Russian activity.

That’s not to say I’m against GDPR, but there is a hard trade-off here, and I don’t think European citizens understand the fact that there’s going to be a conflict here.

Is Facebook Too Powerful?

You’ve brought up the tremendous power of these companies like Facebook. Do you think Facebook is too powerful as it is?

I think we're naturally going to end up with very large communication platforms because people want to be on a platform where all their friends are. People want to be on the platform where, if they meet any random stranger in real life, they'll be able to communicate with them. So you're always going to have this accumulation of power into a small number of companies. That's why I think it's really important for us to set the norms around which things they're responsible for and which things they aren't. So far, very few people are saying there's anything in that second category, and I think that's a really dangerous direction to go.

If you make the platforms responsible for every bad thing that happens, then they will accumulate the power to stop those bad things, and we might come to regret the amount of power we’ve asked them to take upon themselves.

Do you think that Facebook should be held liable, for instance, if there is election interference or that foreign actors are interfering with elections on their platform?

Yeah. The liability issue is a very complicated one, and I'm not a lawyer, so I don't feel so comfortable [answering that]. I think the platforms have a responsibility. I think there are ways to enforce that without kind of strict legal liability. But I am not legally educated enough to come up with any good answers there.

Do you have faith, though, that, for instance, without a legal penalty for Facebook, without a legal penalty or some greater incentive, that Facebook, left to its own devices, is not necessarily going to invest as much as it needs to to find bad actors, to address these problems that could easily just be a blip on the radar for a while and then kind of back to business as usual?

I think users have to hold the platforms accountable. I'm a little afraid when you talk about penalties, [when] you're talking about powers you're giving the government, about whose hands are going to be on those levers. We've got to be real careful about creating the ability to control communication platforms that future potential demagogues are going to be better at manipulating than perhaps the current president. I do think they have a responsibility. I am afraid that there will be a return to the mean and the companies will be focused more on what Wall Street thinks, and it's that kind of short-term thinking that can be really, really dangerous. The people in charge right now I think are really concerned about their personal legacies, and that's not a bad motivation, right, wanting your kids to be proud of what you built. But the vast majority of companies in the world are motivated by short-term profit, and I am a little afraid of a return to the mean – of the tech companies becoming more like any other company, being run to make Wall Street happy on a quarterly basis.

How do you keep the pressure up? I'm not totally sure, because again, I'm worried about both the levers you create in the United States and the precedent you set for all of those countries where the leverage over the tech companies is not going to be used for positive pressure. It's going to be used for really oppressive means.

There’s a huge level of trust, though, that we have to have in a private company, right? We have to take Facebook’s word for it that their systems are going to work to try to protect the electorate or voters or users from disinformation campaigns, and we have to take the word of Facebook that there were 23 accounts versus hundreds of accounts that they just haven’t – I mean, isn’t there something problematic about the fact that we’re in a position where we have to take Facebook’s word for it at this point in time, that we just have to trust Facebook?

Yeah. I think one of the things these tech companies have to do is find a way to share data with academic groups who can then verify the statements they're making and make sure they're not missing anything. That got a lot harder this year after the reaction to Cambridge Analytica, which is something that started with an academic relationship, but I hope that doesn't slow people down. There's been some movement in this direction. There's a group called Social Science One that's trying to coordinate the rules and policies around data access by academics. We need to go further, though. There needs to be a kind of direct access by trusted academic groups with strict privacy policies so that they can both verify the statements that are being made and also spot activity that might not have been seen. It's a big planet, and there are a lot of bad things that can happen, and we need to find a way to get more eyeballs on these issues while also respecting the privacy of people who use the platforms.

That is again one of those hard trade-offs. There's a fundamental trade-off between sharing data with outside groups to try to enforce these rules and find bad actors, and data privacy. There are some technological solutions there, but for the most part we're going to have to decide whether or not the possibility of another data leak is so bad that we don't want to allow academic groups to do that work.

So are you basically saying, though, that eventually there should be independent auditing of companies like Facebook?

"Audit" implies some kind of legal standard. I think the government can do a lot more to encourage the ability of external groups to have unfettered access to data, right? By creating liability shields, and then also – I think there are sticks you can use to say you're going to allow people to look over your shoulder to verify these things.

Do You Trust Facebook?

Do you think Facebook has earned the trust to be able to say, “Trust us; we’ve got this”?

I’m not going to answer that. I’m sorry – that’s just – everybody can make that decision for themselves.

But do you trust them?

I trust the people who I worked with. I think there are some good people who are working on this. That doesn’t mean I don’t think we should pass laws to back that up, right?

We need to have standards around advertising; we need to have standards around transparency in political campaigns. Whether you trust the people or not, we need to have some kind of baseline, because the truth is also – we haven't talked about [this] at all – there are thousands and thousands of companies in the advertising ecosystem. There are thousands of companies running user-generated content sites. You've only used the word "Facebook" this entire time, but there are at least two other massive tech companies here in Silicon Valley that have really big Russian disinformation problems, and then again hundreds of companies who are not going to have the resources that the big guys have, and coming up with a legal standard that applies to all of them I think is going to be critical.

One of the questions we have, and I'm certainly going to pose it to the folks that are there now, is: Are these solvable problems? When you've got 2.2 billion people on a platform in almost every country on earth, when it comes to election interference or disinformation or all the sorts of problems that come up, is this an intractable issue?

If you're specifically talking about disinformation, I don't think it's a completely solvable problem, no. I think you can make it hard enough that actors have to leave a lot of breadcrumbs when they run these campaigns, and then hopefully that gets picked up either by the companies or by governments, but in the end, in any situation where you give people speech, you will have an opportunity for misinformation. I don't honestly think [perfection] is a reasonable standard, right – just like our standard isn't that all journalists have to get every fact right in order for the First Amendment to exist, right?

I think there’s an expectation that you can cure all of humanity’s ills through the application of technology, and I think that’s again both wrong and a very dangerous direction to go. The goal is not to stop all this stuff; it’s to drive it down to the point of where it’s not relevant and/or the downsides are worth the upsides of giving more people voice.

Should Facebook have representatives in every country that are dealing with local issues or dealing with all the issues that arise?

That's a great question. I think Facebook needs to have more people in more countries. I think there are a number of countries in which it would never be safe to have employees who could then be used as leverage by those governments to get what they want. That's a hard trade-off. In some of the places where there are really hard, bad things happening, it's because the government is involved, so putting people in the country probably makes things worse, because you've created a significant point of leverage that they now have over the company.

Election Security

… But referencing those things that you were saying about the midterms that we didn’t get to, what’s the most important – what question do I need to ask you that you’re going to say something that’s of vital importance about our preparedness?

I wouldn't say anything I said is of vital import.

No, seriously.

This is stuff I covered in the Lawfare article. If the GRU pulled the same playbook in 2018, if right now WikiLeaks came out with the email inboxes of the five most vulnerable Democratic Senate candidates, nothing would be different in 2018 than it was in 2016, and I think we’ve got to start to think about when that happens, because that is very much a possibility if not in the midterms, in the presidential election.

Meaning what? What’s the possibility?

I think there's a very good chance that if you could get access to – there's such a focus on things that are leaked or secret in the media. If you have access to that data, you can control the entire media environment. So if you're the first one to release an email dump, you can create the lens through which it is viewed, and then the thousands of downstream articles can be amplified by your bots and by your troll armies online, and I don't think any of that has changed.

There’s some better security on behalf of some of the parties and candidates, but for the most part, there’s been no real discussion of the way that we all got played in 2016, which is the GRU wanted us to talk about very specific stories, and that’s what we did ad nauseam for a month – dozens and dozens of newspaper articles, 24/7 cable news coverage, lots and lots of articles on social media, and that is a whole–of-society issue that, whether Facebook gets rid of the Internet Research Agency or not, doesn’t change at all.

Of all the things that happened in 2016, the better targeting by the Trump campaign was effective, and I think the GRU activity was the most effective. The IRA activity, while having a large footprint, was also very, very dispersed, and again, their goal was not to elect one candidate; it was to drive these divisions. I think it's difficult to make the argument that that swung the election versus the very specific ideas that the GRU was able to put in people's heads about Hillary and her campaign based upon the information that they stole.

What’s your worst-case scenario for these midterms … in terms of where you think the Russians or other players might go?

My biggest concern is an attack against the certainty of the election. The 2016 electoral map was this finely balanced, very strange situation with these two highly unpopular candidates and a very mixed-up electoral college map. In any election, you can attack the certainty of the winner, and, you know, I think in the United States we take for granted year after year of the peaceful transfer of power, but you can imagine a situation in which direct attacks against tabulation machines, against voter rolls, denial-of-service attacks making it difficult for people to vote on the day of, are combined with an online disinformation campaign to rile people up to believe the other side is stealing it.

If they opened up that crack a little bit, the lawyers for the political parties would put the crowbars in and rip it wide open. That's my biggest concern in both 2018 and 2020: even if our adversaries can't find a specific candidate they want to back, what they can do is make it so that the day after the election, America is even more divided, and half the country believes it was stolen. Turning every presidential election into Bush v. Gore is not, I think, an impossibility right now.

What can social media companies like Facebook do to prevent that?

I think that the social media companies have to be extremely aware of what is going on from a disinformation perspective on the day of the election. A lot of the things we've been talking about have happened days or weeks afterward, and this is going to have to be – they're going to have to shut down rumors, misleading stories, doctored images. Those things are going to have to be spotted and shut down on that day, because if we end Election Day with people believing that these ballots were stuffed or that this was the real vote total, then we might never be able to reel those people back in.

And you actually think Facebook has the ability to shut those sorts of stories down?

I think they have the ability to stop the viral spread. The question is, are the systems in place to allow those things to be reported, and then who are the nonpartisan partners that you can rely upon? The problem is there’s not a lot of disinterested people in these situations, so you have to both be open to people pointing out disinformation while not getting played, and that’s another one of these tough balances. It is important to allow people to report bad things that happen, but then that also opens the door to coordinated reporting to silence people that you don’t like. I think building those relationships with the trusted parties, that’s probably the direction you have to go there, and then just having the staff on hand to be able to understand and the decision-making process internally to move very, very quickly, to not spend hours and hours agonizing over a content decision that might have to be made in minutes.

Made in minutes, but in real time, on Election Day is what you’re saying.

Yeah. Yes. That's going to be tough, but my biggest fear is that kind of disinformation attack combined with an actual technical attack. You don't have to be able to change the real vote totals to disrupt. It's much easier to disrupt the counting process, to disrupt the voter rolls, than to change the vote in a way that is permanent and unfixable – which is possible in some states because they don't have paper backups, but it's still a difficult thing to do at scale – whereas in almost every county, I think, there is some kind of vulnerability that would allow you to insert some chaos into the process.

We’re just talking about the United States here, but how in the world would Facebook ever have the capacity to do real-time Election Day monitoring for every country in their elections?

Right. I think what the company has to do is triage the highest-risk elections, either because those elections are poised to have a situation that might cause physical violence, or because there are parties that might be backed by foreign adversaries. Something I'd actually like to see happen externally is a public discussion of the riskiest areas and the risk factors that lead into them – I think this is a place where academia can help, where you can have a well-accepted model of "These are the countries that need to be focused on." There are something like 50-some major elections per year. You can't handle them all at the same level; there has to be some kind of triage, and that will be a difficult choice for the companies to make. Nobody wants to be told that their election isn't important, right?

It’s just a crazy responsibility to put on a private company.

Yeah, it is. But this is the flip side of allowing people to have voice, right? You're going to have to put some of these responsibilities on the companies, and there are some responsibilities that we as a society are going to have to say we don't want them taking on, because it means a granting of power that isn't appropriate. So stopping disinformation, yes; shaping the overall conversation around an election, no – that's the kind of interference that I think needs not to happen.