The Facebook Dilemma | Interview Of Clint Watts: Foreign Policy Research Institute

Clint Watts is a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the Center for Cyber and Homeland Security at The George Washington University. He is also a national security contributor for NBC News and MSNBC.

This is the transcript of an interview with Frontline’s James Jacoby conducted on August 13, 2018. It has been edited in parts for clarity and length.

So can you just give me a little sense of what your background was getting into this research and where you’d been working and who you were working for?

Starting around 2013, I was studying social media influence, mostly around terrorists, predominantly the Islamic State at the time. I had studied that for many years before; I’d worked either at the FBI in counterterrorism, so I knew about it from there, or with Special Operations Command. They would sometimes fund projects, when I was either at the Combating Terrorism Center at West Point or outside it, to assess the influence of social media on extremists.

And that was a nascent field? That was kind of … There were just, I’d imagine, very few of you doing this type of work?

Yeah. There were probably a dozen of us. We’d work on different projects. We would team up, depending on what it was. I would do assessments for the Foreign Policy Research Institute about measuring the influence of social media and how you could see ISIS, at the time, overtake Al Qaeda. That would’ve been in ’13 and ’14. So I was doing it then, but it dates all the way back to the mid-2000s when I was at the Combating Terrorism Center at West Point.

And with the FBI, for instance, how much visibility does the FBI have in terms of the social media activity of people who are here in the United States?

Zero, I think, for the most part. I mean, I don’t really know. Just to be clear, the FBI did not work on social media and extremism. My work there was in programs, training, intelligence reform, those sorts of things.

OK. And what was it … If you can bring me through the story of what sort of started to catch your eye on the Russia front as you’re looking at ISIS.

In early 2014, I’d written some articles about the influence that was happening with the Islamic State online and different options. We were talking about soft power or diplomacy and how we might use it if we weren’t going to send troops into Syria and Iraq. And one of the articles I wrote was: Could we deal with a group called Ahrar al-Sham? If we weren’t actually going to engage them or engage Al Qaeda, could we do that in Syria? And when I did that, I started to see accounts emerge that didn’t look like normal extremist or counterextremist accounts. They were very much pushing a Russian line, and they were pushing it in a way that [suggested that] I was some sort of extremist supporter, that I didn’t understand the threat of ISIS. And those accounts were different from what I had seen before. It wasn’t the usual kind of content that I would encounter.

And where was this when you say accounts? What kind of accounts and who were these people who were emerging?

Right. The accounts that were emerging were on Twitter. That’s predominantly where I would put out any articles that I was writing and that’s where you would see these discussions. And in those discussions, the Twitter accounts were overlapping with accounts that were associated with the Syrian Electronic Army at the time. So in the spring of 2014, the Syrian Electronic Army was a hacker group that seemed to be very pro-Assad regime, but I wasn’t really sure if they were connected with the Syrian government or who they really were working with. The Syrian Electronic Army was hacking into lots of accounts in the United States and businesses. They would hit corporations and they’d even hit the media, the Twitter account of the Associated Press. And with these hacks they were doing a few different things. One, they would do maybe a website defacement or they would spill a database or some hacked information out there, but in that case they actually changed a story. They made a story that the White House essentially had a crisis, which caused the stock market to drop.

So while watching these Twitter accounts, two of my colleagues I’d worked with for a long time on social media influence, Andrew Weisburd and J.M. Berger, and I noticed this pattern that was emerging, and it looked very different from anything we had seen in terms of extremists, whether it was ISIS or Al Qaeda.

Russian Influence Campaigns

And what is an influence campaign? I mean, what were you actually studying and what was your interest about? I mean, the ability on social media. Why social media? Is that the place that you were looking at?

Yeah. Social media had become the gateway for the Islamic State, and Al Qaeda before it, to connect with, radicalize, recruit and bring in, indoctrinate essentially, people online either into a battlefield like Syria and Iraq or – back during the Anwar al-Awlaki days – to do jihad at home.

So you would see them try and connect with people so they could execute attacks worldwide and also reach a larger audience to recruit from. That was an example of an influence campaign. But what you start [to see] in 2014 and beyond were actual influence campaigns to shift public opinion. One of them might be Anonymous, for example, or LulzSec. These were hacking groups that would hack into different organizations and dump their secrets out on the internet to try and influence people’s perceptions. Islamic State was another version. But then you start to see states come online and that’s where the Russian state-sponsored efforts started to emerge.

What were the early signs of the Russian influence campaigns that you were seeing?

There are several things to look for in terms of Russian influence campaigns. One is coordination of message. Even if the accounts are unattributed or hidden and you’re not really sure who’s behind them, they tend to share the same content at the same intervals, about the same time. And so when they do that, they send off a signal. It actually changes the information landscape so that the discussion takes on a totally different tone or new issues emerge inside of it.

The other classic sign is that they will share Russian state-sponsored propaganda. So the accounts weren’t really hidden in that respect; what they were sharing was RT or Sputnik News, and oftentimes they would share it almost in unison. The other part of it is synchronization. You would actually see social bots emerge around these discussions and amplify the conversation, making it look much bigger than it actually was behind the scenes.
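To make that coordination-and-synchronization signal concrete, here is a minimal sketch, assuming a simple list of (account, URL, timestamp) records; the accounts, URL and 10-minute window are illustrative assumptions, not data or tooling described in the interview.

```python
# Minimal sketch of one coordination signal: several accounts posting the same
# link inside a short time window. Record format, accounts, URL and threshold
# are illustrative assumptions, not data from the interview.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    ("acct_a", "https://example-outlet.example/story", datetime(2014, 3, 1, 12, 0)),
    ("acct_b", "https://example-outlet.example/story", datetime(2014, 3, 1, 12, 4)),
    ("acct_c", "https://example-outlet.example/story", datetime(2014, 3, 1, 12, 7)),
    ("acct_d", "https://unrelated.example/post", datetime(2014, 3, 1, 15, 30)),
]

def coordinated_clusters(posts, window=timedelta(minutes=10), min_accounts=3):
    """Return URLs pushed by at least `min_accounts` accounts within `window`."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((ts, account))
    clusters = {}
    for url, items in by_url.items():
        items.sort()  # order by timestamp
        accounts = {account for _, account in items}
        span = items[-1][0] - items[0][0]
        if len(accounts) >= min_accounts and span <= window:
            clusters[url] = sorted(accounts)
    return clusters

print(coordinated_clusters(posts))
# {'https://example-outlet.example/story': ['acct_a', 'acct_b', 'acct_c']}
```

Run as-is, it flags the one shared link pushed by three accounts within minutes of each other; real attribution would obviously require far more than timing overlap.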

So if you put yourself back in 2014, and as you’re looking at the Syrian issue, you’re seeing something a little bit different emerge. What were your suspicions at that time?

Yeah. So from January to really the summer of 2014, the behavior of the accounts that I was encountering was different. Extremists tend to talk about an issue, or maybe stay on an account for a while, but they’ll generally let it go. They’ll move on to whatever the next issue might be. These accounts had a sustained purpose, and they tended to work almost in unison with each other.

They would share the same content oftentimes and they would appear to be in very different locations, almost uniformly spread around the Western world. Some would appear in Europe, some would appear to be located in the United States or maybe Australia. And they would push content that was always pro-Russian. So even if it wasn’t just about ISIS and Syria, they might talk about another foreign policy issue, like Ukraine, that was important at the time. The other part that was really curious was what we stumbled onto in March and April of 2014, which was a link to a White House petition that was called “Alaska Back to Russia.”

We thought the petition was quite odd, but there are oftentimes prank petitions put up on that website. What stood out was that a lot of the Twitter accounts posting this link were speaking in Russian. The language was Russian. And so the more we watched them, the more you started to determine that all of these issues, all of the links they were posting from state-sponsored outlets, all of the propaganda they were essentially pushing, linked back to only one place, and that was Russia.

And ostensibly, whose accounts were these?

They would look like Western citizens. Or they would just look like anonymous accounts. They might have plants or cats, dogs, whatever it might be, and they would try and blend in with organic audiences that were in the Western world. So they were tailored or designed to look like Western populations or normal citizens of Western populations. And they weren’t always just talking about pro-Russian issues. A lot of times they would just retweet or repost content of the audience that they were trying to infiltrate.

Do you remember any examples of the types of accounts that you encountered and who these people were pretending to be?

Sure. I mean, there were several examples going all the way through 2016. I think the most prominent one going into the election was @TEN_GOP, which was designed to look like the Tennessee GOP. And that account gained, I think, over 100,000 followers, and it would be talking about U.S. politics or trying to influence around U.S. politics. At the same point, there was another Tennessee GOP account that was the real one. So you’d see these sorts of competing dynamics play out in this audience space where these accounts would pop up. They would take on the language and even the words, oftentimes, of the audience they were trying to take on and then slowly shift the narrative to whatever they wanted it to be.

That’s mostly on Twitter. How were you seeing Facebook at that time, if at all?

Yeah. Facebook would mostly be synchronization or mentioning of Facebook links or Facebook accounts, which you oftentimes can’t get into. You can’t necessarily friend them. But what they were doing at the same point was creating what I call a multiplatform influence operation. So they would use anonymous posting sites, maybe like a Reddit, a 4chan, or an 8chan to post forgeries or influence discussions. They would use Twitter to propagate the message worldwide because it’s the most effective way to get your message out. But Facebook was about infiltrating audiences on social issues predominantly going into 2015. And that was to push into the audience space and try and ingratiate themselves, win over followers and support, and be able to essentially place information or paid advertisements.

Kind of parse that out for me. I don’t know what the audience space is. So for like the uninitiated viewer, what does that mean? Pushing into the audience space, creating communities and how you were seeing that in real time.

Yeah. So if you want to identify different audiences that you want to influence, you try different issues, and this is what the Russian influence system does really well. They’ll identify on Twitter, Facebook, any other social media application where different audiences they want to infiltrate are.

They were trying to infiltrate on both the political right and the political left in the United States because then you could essentially meet the equation they wanted, which was to bring up support for now-President Trump and turn down support and turnout for Hillary Clinton as a candidate. So it was important for them to infiltrate across all issues. The other thing that they do is they dual-purpose; so they try and find any division in the United States and infiltrate both sides of it. So race issues, religious issues, any sort of social issue like Second Amendment rights or abortion. They will try to go into those audiences so they can keep tempo with it and then later, in an election year, try and influence people in their vote.

Well, start where you’re starting to see Facebook as a vector for some of this stuff.

Right. Facebook became a vector probably in 2015. In 2014, most of what I had observed on Twitter was about two issues: Syria and Ukraine. Those were the big foreign policy issues for Russia. But in 2015, you saw a deliberate shift toward American social issues. So you would see content that surfaced on Twitter or in other places, even in my own Facebook feed, that looked suspect and that tended to focus on any sort of social issue that was divisive in the United States and also polarized around political issues.

One of the biggest ones was Jade Helm 2015, which was a military exercise in the U.S. Southwest. And it was the first time I started to really become worried that Russia was gaining outsized [inaudible] influence. And it was a conspiracy that the Obama administration was deploying troops to the southern border in order to declare martial law and take people’s weapons away. And really, it was a military exercise that was being used to train U.S. military for going overseas so they could learn what it was like to work with indigenous populations. That really took off in 2015. Another one was Black Lives Matter, for example, and police – you know, for police, against police. You would see memes shared, and memes oftentimes would start possibly in Facebook but shift to other platforms.

And actually in your feed you saw a post or something about the military – supposedly military exercises in the Southwest?

Yes. I had friends in my own Facebook feed who would share this content. And I wasn’t sure, based on Twitter or Facebook, whether it started in America or whether it was Russian influence. But it was the kind of content I would see the Russian accounts from 2014 start to populate and repurpose, and I would even see it posted among my friends in the United States.

And who were you telling about what you were finding or observing or suspecting at that point in time?

Yeah. By 2015, I was briefing select slides and saying, “Hey,” you know, during the middle of an ISIS briefing or at the end of an ISIS briefing, “if you’ve seen influence before with the Islamic State, you should see what Russia’s doing. It’s much [more] significant.” And I started publishing for the first time in the fall of 2015, at the Foreign Policy Research Institute, noting the fake Facebook and Twitter accounts that were popping up in the U.S. audience space.

And in terms of how high up you went, either in the intelligence community or the Department of Defense, were you getting an audience with people who actually could do something about this?

I don’t know that they would do anything about it because they were mostly counterterrorism. I think the confusion [that] often comes with me is that I came from a counterterrorism background, so I wasn’t interfacing with people who knew much of anything about Russia. So I would do it during counterterrorism briefings and they would be interested. But I think the American public forgets what a threat ISIS was in 2015 and ’16 and that was the predominant effort of everybody I interfaced with in U.S. government in those two years.

But were you considering what you were seeing coming from Russia as terrorism?

No. I just saw it as political influence. And I think that’s part of the reason why the U.S. government didn’t know how to handle it. No one’s really in charge of defending the U.S. population against foreign influence. No one has that mission. Counterterrorism’s very straightforward. Everyone knows what their role is. But even today I couldn’t point to who in the U.S. government is responsible for protecting the U.S. government or U.S. people from actually being influenced from afar.

And what about within the halls of government, who was paying attention to this as a potential threat to national security here?

I have no idea who in the U.S. government – I still don’t today – really has the ball for countering Russian influence. It’s not entirely clear. The Global Engagement Center [of the U.S. State Department] has taken that mission on since election 2016, and they’ve really expanded their mandate. But in 2016, I was writing about it openly at The Daily Beast and later War on the Rocks. And it wasn’t until after 2016 that it really got strong engagement with the U.S. government.

Extremists And Social Media

In terms of what a site like Facebook or social media offers to someone – a bad actor – why is this such a good environment, a vector to do harm?

Social media provides a window into an audience that’s unprecedented, and using reconnaissance and data-scraping tools you can find out more about anyone than any time in world history. And you don’t have to be in the country that you’re trying to influence. You can [sit] afar. There are open tools. In fact, Facebook might offer you an advertising platform that tries to help you understand the audience so you can engage with it. So it really just took some imagination. It took somebody like the Russians who really understand information warfare to think through how can we use this to achieve our goals. And they did it very effectively because they have a long history of trying to do information warfare and perception management with their own population in Russia.

Why was it a field of interest to you at that point in time? Why social media?

Social media, particularly going back to the mid-2000s, was a platform that extremists were going to in order to connect with people and recruit. And so in the mid-2000s, when I worked at the Combating Terrorism Center, we used to watch Al Qaeda videos on YouTube. Al Qaeda in Iraq was very prolific there. And you could actually [sit] up and take in that data and start to map out who the insurgents might be or the terrorists might be and where they might be.

Going back to Syria and the Syrian Revolution, you could go and actually map out the Free Syrian Army and there are people who did that; just using YouTube or any of these social media platforms, you could do that. So you can only imagine, from the adversary or from a nefarious actor, this tool provides an open window. And so I was looking at it to track foreign fighters. That’s how I really got big into social media. [It] was so I could track foreign fighters going to Somalia or going to Syria and Iraq. You could understand where they were coming from and who was interested. You could understand what they were interested in, in terms of the content, and you could see essentially recruitment pipelines and track them all the way into the country. So from the open-source lens for an extremist group, it was just amazing to watch and study because it was like open data collection I’d never seen before.

And so then when that starts to shift to Russia or a focus of yours starts to shift to Russia, what is it you’re gleaning from watching that? I mean, you’re recognizing what about that?

Russia was different because they weren’t trying to convince people to come and join them. They were trying to convince people to do what they wanted, which is a very different approach. I think that’s why I was so fascinated with it.

And if you looked at any one thing that they were doing in social media, it can quickly be dismissed as dumb or ineffective. But if you watch how that campaign lined up with what they were doing during the Cold War – their program known as Active Measures – and why it was unsuccessful, social media provided them a way to do this in real time at hyperspeed and, even if they fail, to show up the next day and do it again. This was impossible during the analog era. And it was interesting to watch people essentially move or take on many of the messages or ideas or share the content without really knowing where it was coming from.

Russian Influence Campaigns

So are you saying that you knew as early as about 2015 that there were coordinated Russian influence campaigns going on here?

Yes. By 2015, it was clear they were pushing around social issues. I didn’t know they were going for the election in 2015, but I could tell they were definitely trying to win over American audiences around a range of social issues and foreign policy issues like Ukraine and Syria.

And did you see that or identify that as a pretty big threat at the time?

At the time, I was mostly just fascinated because I didn’t understand why they would want to do it or what the goal was. In Syria and Ukraine, it was pretty straightforward: They wanted to advance their foreign policy agenda. Inside the United States, it was a little [more] fuzzy, other than maybe just to turn American perceptions around [on] those foreign conflicts so that maybe we didn’t want to take one side [or] the other, or maybe we would let Syria go. [It] wasn’t entirely clear what they were trying to do probably until the fourth quarter of the fall and winter of 2015.

And what was it at that point that you started to notice that was different?

There was a much more political tone to it. In fact, there was a Sputnik News article that popped up. It said something to the [effect] of: “Is Donald Trump the man to mend fences with Russia or improve relations with Russia?” I thought that was odd. It was being shared by some of the Twitter accounts that I saw. And when I read the article, Donald Trump, at that time, wasn’t seen as a serious candidate. It was kind of a reality show, you know, meets the Republican debate.

And so I was curious about that. And that just picked up through the fall and winter to such a degree that I started to wonder, OK, maybe this influence they’ve been doing for the last year, this winning over audiences, is really about pushing toward the election in 2016.

And who are you talking to about this at the time?

At the time, it was me, J.M. [Berger] and Andrew [Weisburd], and we were trying to decide what to do with it. At one point, I had advocated that maybe we should do a write-up of this, but it seemed rather pointless to raise my signature up against the Russians, who we had heard rumors were hacking people for no particular business or research reason. It seemed kind of like a silly effort. It wasn’t until 2016 that I started to take them much more seriously. I think September ’15 was the first time I wrote about it.

OK. September 2015?

Yeah.

So tell me about that and what did you write?

In the fall of 2015, at the Foreign Policy Research Institute I wrote an article about influence in the Syria context because this was the social media storm I’d been observing. And I noted in one of the paragraphs there: I don’t know why no one is talking about the fake Twitter and Facebook accounts that Russia’s posted that are made to look like Americans. But that was around a foreign policy issue. It wasn’t until the months following that that I really started to see them more discussing political issues, talking about the election – Hillary Clinton or Donald Trump – that I understood where they were going with their influence after all of that.

So after you write that article [in the] fall of 2015, what were the sorts of things that you were seeing after that, that gave you a sense that there was actually an election push here?

The biggest part of it was the overt state-sponsored propaganda from RT and Sputnik News, which was right there in front of everybody, that was talking about the U.S. election. And it was very pro-Trump, which was different from all the other content at the time. Most American content was anti-Trump, even from the GOP. And it wasn’t until the summer of ’16 that it really took on this sort of positive flavor.

The other part was how they were trying to push or sort of discuss this content with a very pro-Republican audience, which I thought was strange. So they weren’t just trying to talk to people who were interested [in] ISIS or the Islamic State. They were trying to actually talk in conversations with the American right wing, and I thought that was strange as an indication that they were trying to move toward a more political bent rather than just talking about social issues.

You were actually observing how fake accounts were engaging with the American electorate on social media?

Yes.

This is before we knew the details of these things. So like at that point, what is it that you’re observing? What kind of conversations were they having on Facebook? What kind of groups did you see them as part of at that point in time?

You could see them in many different groups. … So the Twitter accounts predominantly; sometimes it would be Facebook accounts. It’d be oriented [toward] or use the same names, essentially, as the Twitter accounts, but the Twitter accounts would be trying to discuss issues with the American right predominantly, and it was mostly those in the far-right extreme. So it’d be anti-government, maybe a militia group, white supremacist-oriented. And what was interesting about it for me was the issues they were talking about. They were talking about anti-NATO and anti-EU, which I thought was odd. Having grown up in a Republican stronghold, this seemed very strange to me.

The one above all, though, was the anti-immigration issue. That really connected with that right-wing audience. And then the fourth part was nationalists, not globalists. Nationalists, not globalists. I didn’t really understand even what that meant when I first saw it, but once I looked into it, it was about “America First” – can we put America above any sort of global agenda? And that was something I’d found striking about RT and Sputnik News. Those sorts of websites would talk in a very similar tone: anti-NATO, anti-EU, anti-immigration, nationalists, not globalists.

The other part, though, was on the American left, which was kind of about transparency. “What about the corruption that’s in this country? We need to know more.” And WikiLeaks sort of content that would be reposted or repurposed. I didn’t see it grab as much [attention], but you would see it surface from time to time in those discussions. It was almost like they were trying to bring those two audiences together, the extreme right and almost anarchist or extreme left of the political spectrum in the United States.

Were you writing about this? And who are you talking about this with?

Yeah. Well, in 2015 and ’16, I wrote about it for the Foreign Policy Research Institute, but the cost of writing about it seemed more than it was worth. And it wasn’t until the summer of ’16 that I started writing about it again. I’d actually done presentations about it, and one of them was at a domestic extremism conference in the U.S., where I ran into other people who were studying U.S. domestic extremists. And they were asking, “Have you seen all this Russian content?” I wasn’t alone. There were other people who had discovered it, so we’d talk about it at conferences. But it wasn’t clear what to essentially do with it.

And in the summer of ’16 is when my colleagues and I started writing about it. And it was in direct reaction to Donald Trump taking the stage in Florida and asking Russia if [they] have those 30-some-thousand emails. And I thought it was time to probably start writing about this publicly and get awareness around, well, the Russians are actually pushing for this candidate to win.

Warning The U.S. About Russian Influence

And was there anyone in government that you went to? How do you raise alarms about this at that point in time? … And counterterrorism briefings to whom?

Yeah, I would do counterterrorism briefings openly in Washington, D.C. Government officials would be there, other academics would be there. It was an open discussion in 2015 and 2016, and I would brief it pretty consistently, which is also why I was invited to testify in 2017. People had seen me brief this content before. And I would talk to officials whenever I had casual interactions with them and discuss the Russian influence effort. I think it was known by a wide segment, but no one really believed that it was having an impact on the election.

How Political Manipulation Works

And was there ever anyone that was kind of pretending to be “Debbie from Dayton, Ohio,” or were there accounts that were ostensibly Americans that were posting this content?

You mean Americans who were not …

Who were actually Russians or were kind of these fake accounts but pretending to be “Debbie from Dayton” or someone who’s just a normal American citizen.

Right. Sure. Well, whenever we looked at the campaign in the summer of 2016, we actually took down all the tweets and sort of looked at them and we were like: OK, what’s common about these?

And what they would do is they’d use similar bios, and they would use seven or eight keywords. In that storm, which was a very right-wing-oriented storm, they would use “Constitution,” “God,” “USA,” “MAGA,” “Trump,” because they were trying to connect with and look like that audience. And those are a lot of the Internet Research Agency accounts that were taken down. And there are great examples where they would use sort of vague names or even combinations of words that didn’t make a whole lot of sense. But the idea was to use the bio, use the hashtags, that matched the audience they wanted to engage with.
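As a rough illustration of the bio-keyword overlap described here – accounts dressing themselves in “Constitution,” “God,” “USA,” “MAGA,” “Trump” to blend in with an audience – here is a minimal Python sketch. The keyword set, handles and bios are made-up examples, not actual Internet Research Agency data.

```python
# Minimal sketch: score account bios by how many audience keywords they contain.
# Keyword list, handles and bios are illustrative assumptions, not real data.
AUDIENCE_KEYWORDS = {"constitution", "god", "usa", "maga", "trump"}

def bio_keyword_score(bio, keywords=AUDIENCE_KEYWORDS):
    """Count how many audience keywords appear in a bio (case-insensitive)."""
    words = {word.strip(".,!#") for word in bio.lower().split()}
    return len(words & keywords)

bios = {
    "debbie_from_dayton": "God, family, Constitution. MAGA! Proud USA patriot.",
    "cat_pictures_daily": "I post cats. Sometimes dogs.",
}

# A high score only says the bio matches the audience profile; on its own it
# proves nothing, but combined with timing and content overlap it narrows the search.
for handle, bio in bios.items():
    print(handle, bio_keyword_score(bio))
```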

And what sort of bio did these people have or put forward?

Yeah. The ones I was observing were predominantly right wing of the United States, so they would want to look like American conservatives. They might talk about guns, they might be anti-Black Lives Matter or in some cases it would be about the Constitution or veterans. It just depended on the audience that they were trying to engage with, but predominantly it was the American right wing.

And what is it about Facebook, for instance – we’ll get to Facebook and the revelations that came later – but what is it about these preference bubbles that make these campaigns doable to this extent? I guess, what is it about Facebook that enables this?

Social media’s a fantastic weapon for influence because you can essentially subvert how an audience normally receives information. The preference bubble really plays to three biases. The first one is confirmation bias. And [on] social media – whether it’s Twitter, but even more so on Facebook – you tend to get into communities where you seek out information that confirms what you want to believe or what you already believe. So as a Russian propagandist, if this audience is a veterans group or a right-wing group or a left-wing group, it doesn’t really matter. Just keep feeding them content that matches what they’re already clicking on.

The second part is implicit bias. You can essentially skirt how the source of information is delivered to the user. Instead of the source of the information being RT or Sputnik News, the source of the information is your friend. You’ll take it in, [whereas] you might have rejected it if it came from a foreign country.

The third part – essentially what sets in over time – is status quo bias. Once you get in the group, you don’t want to post anything that challenges what the group is saying as the majority, because you could get pushed out of the group. So you’ll go along with the status quo even if you don’t necessarily believe in it. So content can get pushed in. Let’s say it’s nationalist-not-globalist or anti-immigration content. [It] can get pushed into your preference bubble, and you will go along with it tacitly by ignoring it. It makes the rest of the group think you must agree with it or you would say something. This is the danger of preference bubbles and why it’s such a unique weapon for a foreign-influence operation if you can infiltrate it.

Facebook And Filter Bubbles

And this was something that you were recognizing at that point in time, meaning this is something you were recognizing in 2015 or so?

I had seen preference bubbles pop up even with extremists before 2015, in the sense that they create a virtual world where they try and influence people and share only the same information sources and bombard the audience with things from their friends and people who look like them and talk like them. So this phenomenon was already popping up, and I would even see it in my own counterterrorism discussions. People wanted to be part of the crowd and you influence each other. It’s much more difficult to go against the crowd. It’s much easier to just succumb to the crowd and sort of go with what everyone else is saying. So this phenomenon had already started to occur as soon as social media really was born.

And just describe what a preference bubble is.

A preference bubble from Facebook’s perspective would be: You friend people, all your friends and colleagues through the course of your life, and you share information back and forth. The bubble hardens whenever you start to share information sources because you are only clicking on that which you like, which then percolates back into the bubble of your friends and family, whoever it might be.

You also are only taking information in from people who think like you and talk like you. You don’t friend people who anger you or think differently from you. And whenever you get into negative discussions, what oftentimes happens is you mute, block, or unfollow or unfriend somebody inside your preference bubble. That walls you off from any opposition. What ultimately happens is you go into a virtual world that isn’t actual reality. It’s one that you’ve selected. It’s the preferences that you’ve selected, which create this virtual world, your preference bubble, and you are in it. It’s much harder for you to take in information from the outside, and your friends and family are confirming to you what you want to believe rather than maybe what is true or, in fact, what’s happening in the real world.

And you were recognizing this as a potential vulnerability in terms of how people could be manipulated? Describe how you saw preference bubbles at the time.

Preference bubbles are powerful, whether you’re extremist or a political partisan, because you can bring somebody into this virtual world and then the other thing you do is you discredit or you refute all alternative media that challenges you. You call it fake news or you say: “You can’t believe those people because they’re not like us.”

The other thing you can do if you have a good preference bubble is bring people into an application. So you can even sidestep a social media platform and say, “I’ve developed my own app which is our news source. It’s our information source for our preference bubble and I just want you to come here to access that information.” This is particularly powerful in a political context. If you can take people from the open, bring them into social media, narrow their choice of media so it’s only the media you want them to get and deliver it to them, and then bring them to an application that solely controls how they consume information, that’s the hardening of the preference bubble. And it’s really what we’re seeing happen in our country and around the world today.

And it seems like social media is designed to do exactly that. It’s meant to feed you the information that is most personalized to you. Right?

This is the challenge for the social media platforms. They’ve created a platform that’s designed to give you what you want when you want it. And when you put your friends on it, you’re all now giving each other what [you] want when [you] want it. And when that happens, you’re isolating yourself and creating an alternative world. You’re choosing facts that may or may not be true. You’re choosing information sources that may or may not be real. They can be alternative outlets or ones that just popped up yesterday. And essentially what has happened is that the platform, in pursuit of delivering you what you want, has become this weapon for influencers, where they can actually put you in a virtual bubble, a false reality that isn’t true at all.

Lessons From The Arab Spring

And you think that the Russians were recognizing this in 2014, 2015?

Yes. My theory is the Russians identified this with the Arab Spring. They saw this as the first time where a population could be mobilized just through social media and information sources to move in a specific direction and achieve something without much military cost at all. They had already sort of tried this system out on their own people in the sense that social media became a way to essentially inundate the audience with so much information [that] it’s impossible to know what’s true. So any sort of challenge to the government, you just flood the audience with so many alternatives that you can’t really figure out what is true or false. You just withdraw.

They knew this already. So when they saw this opening essentially in the United States, in the Western world and Europe – when they saw the opening of social media – it was a way for them to do the information warfare they had wanted to do back in the Cold War, to erode us from the inside out, at a very low cost, without ever setting foot in the Western world.

And how big a deal was it that Facebook, for instance, became a news source for people? Can you kind of set that in context for me in terms of why that’s important?

Facebook as a news source becomes very problematic because if you’re only getting information you want, or information from people who talk like you and think like you, the news that gets through is a very selective version of the information world. It’s not actually all of the information world. So people will send you information that confirms their belief and your belief, but they’re not going to send you anything that challenges it. So if you want to sow a conspiracy, this is a very powerful tool. You just inundate the audience with more information that suits your needs. So generally, in the theory of influence, there are four principles about information.

You tend to believe that which you see first. You tend to believe that which you see the most, which social media is very useful for. You tend to believe things that are not accompanied by a rebuttal, which social media’s perfect for because you can block it out. And if it comes from a trusted source – so if you establish trust with an audience by playing to their political views – you can reinforce that confirmation bias even more.

Facebook And Filter Bubbles

I’m just trying to put it back in time. When was it that you started to recognize that preference bubbles or people being in their own little news ecosystem or echo chambers as something more problematic than just the theory, but that this actually could be used for active measures?

[In] 2011, I started running surveys on social media and I would send them out to colleagues doing counterterrorism [research] and I would ask their opinions. And no matter what I asked, they always went and fell back on their biases. Essentially status quo bias, confirmation bias, all these psychological heuristics. And I couldn’t actually get good predictions. The idea was the crowd is always smarter than any elite core and you could crowdsource anything. This was very popular at the time with social media applications. But their answers almost universally were wrong whenever I summed them together.

And I started studying the wisdom of outliers – people who had special knowledge or travel experience and would actually go against the crowd. They tended to be more accurate on some of the predictions that I was doing than the crowd as a whole. That’s when I started to wonder [if] even really smart people, when they come together on social media platforms, move and herd digitally together and make themselves collectively dumber because they want to stay inside of this preference bubble. They want to share the same information – confirmation bias, implicit bias and status quo bias – all the time.

And when was it that you were recognizing that that then could be exploited?

The Islamic State essentially created a preference bubble like no one else before. They created an alternative reality. And I started to believe how powerful it was when there were refugees leaving Syria in droves and yet there were people being recruited from Europe, whole families in some cases, to actually go into Syria, which is a war-torn country that’s being bombed, at the same time millions of others are leaving. To do that you have to create a preference bubble where people believe the Islamic State is going to be this great nation, essentially, that they can live in.

That was really put on steroids. What Russia understood so well was how to manage this influence, because they had always looked at public relations battles as if there is no fact-versus-fiction. You can’t really determine what the truth is; it’s just your perception versus my perception. They understood the value of social media to do that, and you would even see their generals and the people in the media, their propagandists, talk about this openly. And they would say that there are asymmetric advantages in the social media world that allow them to achieve their objectives without setting foot on the battlefield.

Let’s go through the timeline a little bit. So the 2016 election, November 2016, [then] in January …

January of ’17? Yeah, yeah. Right? January of ’17, there’d been an intelligence report that there’d been influence campaigns on social media. Right?

Yeah.

Misinformation In The 2016 Election

So tell me that story about that report being released and how significant it was at the time.

A week before the election I saw that the race for the presidency was essentially 50/50, and I had tried to figure out a way to get some research funding [to] study Russia. I just hadn’t done it. So I thought it was a giant waste of time. And I called my two colleagues and I said it’s 50/50 and if we can’t get this out before the election, everyone’s going to think it’s some sort of a political stunt, what we’ve been watching over the last few years. So for a few nights I stayed up. We took turns writing essentially an overview of how Russia had rebooted their influence system for the digital age and I wanted it out before Election Day. War on the Rocks put it out two days before the election and nothing happened. I think a few people read it, but it really wasn’t received in many circles beyond people who are interested in influence and social media.

Let me just ask you this. In essence, what did the article say?

The article detailed how Russia uses overt-to-covert – from social media in a covert setting, to overt state-sponsored propaganda – to influence the audience in the United States and really try to advance their agenda; but also how they were trying to push for President Trump and against candidate Clinton at the time. What we wanted everybody to understand, too, was there was a second part of that, which is really how Russia wanted to destroy democracy by undermining fact-versus-fiction and constituent trust in elections. Election rigging [and] voter fraud were two big themes going into Election Day and they really piggybacked on those.

It wasn’t until about three or four weeks after the election that I started getting some calls because the allegations of fake news were going wild and people were trying to calculate what had really happened in the election. Were some votes pushed one way or another? There were talks about election hacking. Did it really happen or were votes changed? And so that finally brought some interest into what we had been writing about just before the election.

Did you ever hear from any of the social media companies like Twitter or Facebook?

No. I’m trying to think in terms of dates. Not until well after. No. Not in the run-up to 2016. After, I did make contact with them and have talked with them, and they actually have talked to me about the things they’re doing to try and improve security on their platforms.

So what then happens, when you describe the fake news and three weeks after the election, what did you make of that moment in time? Where was your head at that point in trying to figure out what had really happened here?

I was trying to assess whether the vote had really been changed in terms of hacking, and I didn’t believe it had. I thought the hacking was always a secondary effort, which was just trying to create the illusion that maybe your vote didn’t count. What I was most worried about on election night wasn’t who was going to win but whether someone would show up to a ballot box – maybe with weapons – thinking there was a fake election. And that played out in a different context with Pizzagate a few weeks after that, where someone, based on a child sex ring allegation in Washington, D.C., showed up with a weapon to investigate it. That’s where I was mostly focused at the end of 2016. And it wasn’t until that report came out in 2017 that I finally got a different sort of independent verification that what we were seeing was what we had been watching all this time.

Ah, so bring me to that moment. So what happens in January 2017?

The intelligence report had come out, and I had seen the sanctions as well. The Obama administration, right at the end of December, had sanctioned [Russia] around some hackers, and they were hackers that were known. I had seen them in other contexts before and had researched them. And so it gave us a trail that we could look back [on], and it matched up with what we had been seeing in the two and a half years leading up to that. Then 2017 is really when the Russian influence discussion took off. There was a lot of discussion about it. I started writing more about how Russia wins elections, because it wasn’t over yet. The other part that America had sort of forgotten about was that France and Germany were the next ones to have elections coming up. So there was still more to be done and there was more to monitor.

And so you felt what when the intelligence report comes out in January of 2017?

By January of 2017, I felt better in the sense that there was some awareness of it, but also some discouragement, because the president essentially declared it fake news and did not believe that it really happened. [That] seemed to go against the U.S. intelligence community’s assessment. So I became quite nervous. I actually wrote up some assessments of what the Trump campaign’s involvement with this might be – how it would line up in terms of a Russian intelligence effort. And [I] started to get more worried about not only whether this was effective but whether it was corroding or eroding essentially U.S. institutions that would combat this sort of influence.

Facebook’s Response To The 2016 Election

And at that point in time begins a number of investigations into what really happened. What was your sense of Facebook’s response to the initial investigations into what had happened on their platform in 2016?

Facebook’s initial response [from CEO] Mark Zuckerberg was a little disappointing because I thought he was trying to downplay the effort. And at the same time, you saw political campaigns trying to downplay it because it could undermine whether they won legitimately or not. And so I understood the nervousness about the legitimacy of the election, but I also saw it as a cop-out and not being responsible about what had happened on their platform.

In what way? Why a cop-out? Why do you use that term?

It seemed like they didn’t really want to look deep into it. And I knew that these accounts, whether it’s Facebook, Twitter, had not gone away. They were still operating as usual and they were still trying to influence because the U.S. election was just one part of a broader campaign by the Kremlin. They were now focused on France and Germany, those elections that were coming along, and trying to win those as well. So I thought it was important that they immediately swivel and try and get their hands around this influence that was going on. And they ultimately did. Facebook shut down a lot of accounts going into the election. I believe they claimed [that] around 30,000 different accounts were shut down. So I started to feel better at that point, but initially I think, for the public, it didn’t instill a lot of confidence that they were on top of the problem.

It was basically a year later that there’s the hearing in November of 2017, when their general counsels appear and there are the revelations about the ads.

This is the same one, October 31st. I was on that one.

This is right. So if you can kind of go into some detail, what emerges at that point in time? What revelations come to light about what had really happened in 2016, and what did you make of that, because that was stuff you probably didn’t even know about at that point?

Yeah. When the revelations came forward, it was Facebook that led. They seemed to know more about what was going on on their platform than some of the other social media companies did. I was hopeful in the sense that it confirmed a lot of what I had been seeing, and it gave me more confidence in what I had been seeing and what its purposes were. But I was nervous in the sense that other social media platforms, namely Twitter, had essentially unearthed nothing at that point. And I was getting worried that they really didn’t know what was going on on their platform or didn’t have a way to identify and clean it out. Since then, you’ve seen pretty steadily where Facebook, Twitter, Reddit and other platforms have gone through and identified content, removed it, and essentially pushed it away. Even YouTube and Google have gone through versions of this.

But it also told me that they’re not working together very well. To understand how a state or a terrorist group is influencing a population and maybe even pushing them toward violence, you have to have a multiplatform effort. They don’t just use one platform, they use all these platforms. And knowing or detecting what’s happening on one social media platform like Facebook may oftentimes come from seeing what happens on Twitter, YouTube, Telegram, whatever it might be. So I was nervous and still am today that they’re not working together well enough to really protect all their platforms as an industry.

How Political Manipulation Works

So are you able to bring me through specifically some of the things that came to light? You mentioned just now that it could push people toward violence. But there were a few revelations about various rallies, like real-world, active measures that actually came to life via Facebook.

Yeah.

Can you kind of elucidate what happened there? What came to light there?

Yeah. What was most illuminating and confirmatory about what came through was that they were using content in a dual-purpose way. So they were posting advertisements around social issues, which might be for or against religion. We saw this with pro- and anti-Islam protests in Texas. Or they were using content in ways to redirect, to try and anger an audience. This might be a Black Lives Matter protest and then a pro-law enforcement sort of response. This content – the memes, the advertisements, the discussions – is meant to provoke one audience and align it against another audience, and they were actually bringing people together in the physical world without actually showing up there. I think the most illustrative and scary version is that they put an anti-Clinton, pro-Trump rally together down in Florida and actually had people act out parts in it by coaching them over the phone.

And this shows that people don’t really understand where the content is coming from. But once they’re engaged with the content, they can actually be manipulated by somebody, even in a telephone conversation, to provoke or carry out an act, and that’s an amazing power.

This would be the equivalent of Anwar al-Awlaki, if you remember, during the terrorism days. We were worried that he might encourage or inspire someone to do an attack on the United States. Think about this capability with social media – if you can get people to fight each other in a foreign country, or you can actually contact them and ask them to do something on your behalf. …

… When you hear a revelation like that, what goes through your mind? I mean, this is what you studied, this is your big concern.

The thing I worry about the most is Russia or any state actor or even an extremist group pretending to be somebody they’re not in social media [and] infiltrating a group that is already moving toward violence and then provoking them, giving them some sort of ammunition, giving them instruction or guidance or picking targets, and encouraging them to actually do an attack or conduct violence inside the United States. And they may not realize who they’re actually talking to or who’s nudging them along.

This is the most extreme version of that case and I think it’s really important to understand [the] social media influence. The goal isn’t to hack into an electric grid and turn it off; it’s to hack into someone’s mind and convince them to turn the electric grid off on your behalf and you be hidden the entire time and no one knows that you’re behind it.

What could a company like Facebook do to prevent a scenario like that from playing out?

The big challenge for Facebook is that they’re trying to protect freedom of speech, freedom of the press, and make sure that they have a platform which allows people to rapidly share and display content. So the responsibility, essentially the one thing they can do, is ensure authenticity. Are people actually who they say they are? If they can’t guarantee that, then it allows any manipulator to step in and dupe somebody for a nefarious cause. It could be criminal, state-sponsored influence or even extremism. This is the big challenge for them, and it’s very difficult for them to police the bad thoughts or bad communications of people who are acting within the terms of service on their platform.

So do you think this is a solvable problem?

I think the problem can be solved if the public and private sector work together and social media companies work together as an industry. These are the two big gaps right now. Look at hackers, for example, against financial institutions or the energy grid. They may compete, banks may compete, but in terms of securing their systems, they oftentimes work together for information security because if any one of them is harmed by malware or an attack, they all suffer.

This hasn’t happened yet in the social media space. They still operate independently trying to detect, just based on their own signatures, somebody who’s not being exactly who they say they are or trying to nudge people along. The other part of it is oftentimes the U.S. government may detect accounts or intelligence efforts or nefarious people and know who they are online. But do they necessarily have a good way to communicate that to the social media companies? And vice versa, the social media companies might see nefarious waves of activity but not know who’s behind it. And so if it’s not a violation of their terms of service, they can’t necessarily do anything. So building that public-private partnership to make sure that bad activity isn’t going on on the platform and violence isn’t emerging, I think is a critical next step.

So right now, going into the midterm elections, how would you grade the communication between our intelligence agencies, the FBI, and the tech giants like Facebook?

In terms of Facebook, I think internally they’ve made massive gains in the past two years. They’ve actually put a lot of resources into it, and you’ve seen them do a lot of changes in their terms of service, and they disclosed a lot of accounts that they’ve taken down. What I am uncertain about is what the communication is like with other social media platforms. We just saw two weeks ago [July 31, 2018], Instagram and Facebook accounts were taken down for trying to infiltrate around the election.

But have we paired that up with any other nefarious activity on other social media platforms? And can we help Facebook attribute that? Does the U.S. government know if those actors are really Russian actors or some other actor maybe that’s using a platform in a nefarious way?

But at the end of the day, are we not basically trusting Facebook and leaving it at their word for what they’re doing here?

Yes. The big challenge of all of this is: What is the government’s role in policing bad actors on social media? At this point we’re just hoping that the social media platforms will protect us from these sorts of things, but they’re not really designed to be intelligence agencies. They don’t always know who’s on the other end [or] whether someone is trying to dupe people online.

What we don’t know is what the role of the U.S. government is. We saw a very unusual press conference a couple weeks ago [July 16, 2018] where different agencies said, this is what my agency is doing to protect against foreign influence. But is that really integrated as a government strategy? At this point it’s very hard to determine what the overall plan is, as opposed to a series of actions the U.S. government is taking.

So what’s your big concern going into the midterm elections?

My biggest concern is that a hack won’t actually change the votes, but it will just create enough doubt in the U.S. population that people don’t believe that their vote counted.

The whole idea of election fraud, rigged [voting] systems, all of this is nefarious influence, and it undermines democracy. So without even being successful, without changing a vote or even changing a roll, if you can just create a hack that provokes a media storm, you will have influenced the population and cast doubt on the integrity of our elections.

Do you think we have a proper accounting of what actually happened during the 2016 election when it comes to the Russian influence campaign here?

I think we’re pretty close in terms of what actually happened in 2016. The thing we’ll never be able to know is: Did that influence actually change the election? We can’t dissect it from all of the other things that happened in the run-up to the election, and there’s no accurate polling. Going into Election Day, most people thought Hillary Clinton was going to win. So what would you compare it to? There’s no way to really know. I think what we don’t have a handle on is how much of this nefarious influence is happening on our platforms today, what’s happening across the ecosystem, and what’s happening around the world.

I’m not so much worried about Russian influence in 2018 as I am about authoritarians who are using this to suppress their own people or mobilize people via Facebook to punish another sort of threat or use WhatsApp to sow conspiracies. That’s what I’m most concerned about, because we can talk about elections all we want, but this happens in every country in nearly every context. And how do you police the whole world, hundreds of countries and many different languages? How do you police influence on these social media platforms all at the same time? It’s a very difficult chore for any social media platform.

I mean, difficult chore or an impossible chore?

It’s impossible in many ways to do this without some sort of public-private partnership. You’d have to have some sort of connection between governments that are doing intelligence work, counterespionage, and law enforcement that are seeing spikes in violence. You need all those connections brought together to do it adequately, because the platforms are designed such that fake news or nefarious influence can be created far faster than anyone can police it.

And are you seeing any signs that we’re moving toward the direction that you think it would take to solve this in any way?

Individual companies are making progress. The larger ones are doing better because they have more resources. I’m more nervous about the smaller social media platforms because they don’t have enough personnel or resources to actually police their platforms. In terms of the government, I’m quite concerned. I don’t know what their stance is or what their playbook is. For example, if there is a hack attempt against one of the election databases on Election Day, what is going to be the game plan when people start showing up to the polls and protesting or claiming that it’s a fraudulent election?

This could be a political group or an activist group or a foreign nation that could influence around this. I don’t really know what that playbook looks like or what our response is. And today, our president doesn’t even want to acknowledge that it happened two years ago.

When I asked you whether you thought there’s been a full accounting for what happened in 2016, you seemed pretty confident that we know a lot. I mean, what makes you confident about that when we’re kind of taking the word from these companies about what really went down on their platforms?

I think the social media platforms not only suffered a lot of government attention, which has led them to really investigate this fairly thoroughly, [but] they’ve suffered customer resentment and pushback. People don’t trust the platform and they’re disengaging with it or they’re not wanting to share with their friends. And I think that customer and user outrage has really pushed the social media companies to take it much more seriously and try and re-win the trust of their users with the actual platform.

And what if there were groups other than the Russians, or what if there were other nefarious actors influencing or trying to influence during the 2016 election? I mean, I just wonder how anyone can be confident given we basically have to take their word for it.

I think that’s correct. We can’t know for sure. We focused extensively on the Russia effort, and for good reason: we want to know if a foreign country was tipping our election. But we don’t know about the other actors out there, political or social manipulators who want to push an audience under false pretenses, using fake accounts or distributing false information, who might also have influenced the elections. For example, in Macedonia there are clickbait farms which put out essentially false news stories, and they use the clickbait, the ad revenue, to power themselves. How much effect did they have, and what were their connections, maybe, to a state sponsor or nonstate sponsor?

So we don’t know that. I think we feel pretty good about [knowing] what Russia did in terms of the influence, but there’s many tentacles to this on any given election and I’m not sure we could investigate all of them if we wanted to.

In November of 2017, Facebook comes forward with the amount of ad spending that the Russians did on Facebook – roughly $100,000 of ads and X number of impressions on Facebook. What did you think at the time when that’s what their revelation was?

I thought the Russia Facebook ad project was essentially the smallest component of their overall influence system and probably the least effective. The most effective and most efficient way to do influence is to use organic content from the audience that you’re trying to influence. So when you see the audience take pictures or you see the audience actually participate or create a meme, just take that content and repurpose it, reuse it, resend it into the audience space.

The ads are the tougher way to go. You have to create content that looks like the audience made it and engages the audience, and then get it shared. That is the most expensive and slowest way to go about this. So you’re always better off trying to get the audience to create content, because you know it’s already successful, and repurposing that. This is why, for example, if they create a physical event, say a pro-Islam or anti-Islam rally, or an anti-Hillary Clinton campaign event and a pro-Trump one, they’ll often try to encourage people to take pictures at the event, because that is an organic event, or it appears to be organic, and that content is a picture created by the audience, which they can then re-share. You’re essentially enlisting the audience you want to influence to create the things you want to influence them with.

So when Facebook comes forward and says, “All right, this is the extent of it. It’s $100,000 worth of ads,” did you think there’s a whole other part of the picture here that they’re leaving out?

Sure. I assumed that they had created different events pages, which I think are much more powerful; that they had repurposed content and injected it into the audience space; or created entire groups under false auspices. And we saw this just in the last few weeks, with still more accounts being closed down. So I don’t think at any point you can ever walk out and say definitively, oh, we shut down all of these accounts. It is a really tough sell to say that you’re 100 percent perfect at shutting down all of this influence. I’ve always assumed that there’s more out there that we just don’t know about.

So when Facebook does that though, if you put yourself back at that moment, did you think that that was an evasive tactic to basically come clean on the ads knowing that there would be a whole other organic reach to that? Did you think that they were coming clean at that point?

I think they were trying to get in front of it and demonstrate that they’re being aggressive about cleaning up their platform. I didn’t understand why they tried to minimize the effort. I found that frustrating because, one, you really don’t know the effect of any given ad, post, click, like, or share on any platform. You don’t really know how that changes people’s minds. And we have the least research visibility into Facebook. It’s a closed system, so they’re the only ones that really know what the reach is. And I’m not sure they even know what the impact of that is, and it’s not something we can measure from the outside. Twitter is an open platform, so you have a lot of researchers who can assess the reach and impact of things much more than you can with a closed system.

So when you say the impact of something, what do you mean?

Did it actually change people’s perceptions? Did they take content that was shared with them and recommunicate it, maybe in a different fashion? For example, take “nationalism, not globalism,” that meme or that sort of ad or push: did the audience pick it up and repeat it as if it were their own words? That’s the ultimate goal in influence. I’ll give you a more recent example. If you have crowds chanting, “Russia is our friend,” that is a huge change in public perception in the United States. That’s a behavior change. That’s the ultimate goal of an influence effort. And I’m not sure that we really know in the Facebook world how that impact actually plays out.

So essentially, is there any way of knowing how the 2016 election was influenced by foreign actors?

No. I don’t think we can actually put a number of votes or a percentage on that. But I think there are some noticeable things that we can look at. You have a population shift in the United States that actually is far more supportive of Russia and Vladimir Putin than at any time, particularly since the Cold War and maybe even during the Cold War. That shift is dramatic and it’s happened in only a few years.

You’ve seen people in the United States show up to protest and chant in unison, “Russia is our friend.” That’s unheard of and really something new in our country. You’ve seen people show up wearing T-shirts that say “Bashar al-Assad’s [barrel bomb] factory.” Assad is a cause that Russia has promoted, and it is taking [hold] in our country. You’re seeing people fly a Russian flag or wear a T-shirt that says, “I would rather be a Russian than a Democrat.” I think that’s a powerful sort of behavior change you see in this country, and it can’t be separated from the fact that Russia has been trying to influence us.

All of this works with an assumption that human beings are very vulnerable, it sounds like from what you’re saying, to manipulation, or certain groups are. A critic might say we all have free will, we’re not lemmings, we can think for ourselves, and all of this influence campaign stuff is pretty much overblown; that 2016 was just a very basic effort by the Russians, and that there isn’t really much to this influence idea.

Yeah. I often hear that it’s very basic, but very basic repeated day after day, hour after hour, minute by minute, on social media is far more effective than anything anyone could do in the analog era. People on social media consume way more information than at any time in human history. And it’s about repetition. I [said] before that the thing you tend to believe is that which you see the most. Compare 1980 to today: you can now hit somebody with the same message thousands, maybe tens of thousands of times, and it will change their perception both of reality and of what they believe in. You can nudge someone far more quickly. And that might be around a product; advertisers can do this. But when it’s about foreign policy, or about fact and fiction in policy debates, or whether you should get shots or not get shots, this changes the way the public actually behaves. And [it] becomes a huge public safety issue if you have a very sophisticated manipulator with a lot of time, a lot of resources, [and] who does this in a very committed way and doesn’t stop.

And that’s going on right now?

Yes. It’s going on right now. Russia does this in terms of their own influence, but I would caution that the big fear going forward is everybody adopting that playbook. Authoritarians are going to use this to try to control their own populations or shift foreign opinion. You will see political manipulators and activists, especially those in power with access to artificial intelligence, use this in a very dominating way on the populations they want to influence. And then the other part I think we need to look for is what I call trolling as a service. Anybody with enough money can hire a firm that will create fake news and false accounts to look like any population and try to push them toward their agenda, and the public won’t know that there’s actually a hidden manipulator behind the scenes that has changed their perception of the world.

… So the interesting position Facebook is in is that, one, they want to downplay how influential they are, but at the same time, their entire business model is about selling advertisers of every stripe on how influential they are.

Facebook’s in a tough position because they’re going to give a business development pitch maybe that says our advertising can reach anybody in the world and it’s the most effective. Then they’re going to turn around and say foreign influence is not that nefarious and we’ve got control of it. And it can’t be both. You can’t say both of those things at the same time and really have a platform that is authentic and has good integrity.

Deepfake

Are you worried about deepfake? Do you know about deepfake?

Yeah. I can tell you about the three things I’m worried about there.

OK, what are they? Because most people don’t know, but I’m just curious.

Yeah. There’s three things I think people should worry about with social media going forward in terms of influence: political, social, nation-state, whatever it might be. One is machine learning and artificial intelligence, because if you give it to me, for example, I can mine a crowd’s information so quickly [that] I can figure out what they will be influenced by and how I can dupe them.

The second part is deepfakes, which is the creation of false audio, false video content that looks so real that you’re not sure whether it happened or not. You can make any world figure with enough content online say just about anything. And for those that aren’t good at evaluating information sources, they will be fooled into believing things that may or may not be true. It could help you win an election or it could help you mobilize a whole crowd or even start a war.

The third part, I think, is chat bots essentially, which is the creation of social bots that pass the Turing test, which means they’re indistinguishable from a human. A machine can be just as effective and people will believe that it’s a human they’re talking to.

Imagine if I can create 100 chat bots that all look like real humans and they engage with you thousands of times a day and tell you the same thing over and over and confirm my agenda. It will change your view of what is true. It’ll make you take on or at least assess things that you might otherwise have ignored.

And everything that you’re talking about already exists?

In some form, it exists today. Machine learning is already taking off. Deepfakes, there are some cruder versions but they get more sophisticated every day. And the bots that pass the Turing test, there are some that do that now and they’ll only get better.

So what’s at stake here?

I think over time it’s about how we want our world to be. Do we want to shape our world with reality and real-world engagements, or do we really want to just stay in a preference bubble and have our beliefs confirmed to us? Have our friends share content with us that we like? What can happen is that over time we live in such a virtual reality that the real world changes behind it; that we shape the real world to look like a false horizon, really a false world, where things like disease crop back up because we believed a conspiracy about shots. Or maybe we go to war under false pretenses, driven by a hidden manipulator that has something to gain. Or our economy sours because we don’t have trust in institutions.

All of that, I think, is really at stake because those that have access to social media and information sources, those that can manipulate it with technology, have a distinct advantage, and that plays really into the hands of the rich and those with the greatest AI [artificial intelligence].

The [Special Counsel Robert] Mueller indictment. What did you learn that was new and revelatory when the Mueller indictment came out?

The first one?

The first one. Yeah. So it comes out and what is Clint Watts thinking at that point in time? What are you learning?

Thank God. I think when the Mueller indictment came out I felt relief that everything we had been seeing was now being documented, [including] the detail behind the scenes. What I thought was fascinating about it were the things we could never see. One part of it I thought was particularly valuable was the coverup: essentially, someone at the troll farm saying, “Hey, I was at work today. They know about us. We need to cover this up.” That was confirmation I could never have reached just by watching social media in the open.

… So what does that mean?

The Mueller team was onto the Internet Research Agency, and so was the U.S. government. The troll farm tried to cover its tracks, but it seems as if the investigators had real communication intercepts tracking their conversations back and forth and the cleanup they were trying to do. I thought that was a very powerful confirmation of how big and large-scale this was, and how organized the effort was on Russia’s end.

So what did the Mueller indictment reveal?

The Mueller indictment revealed what a systemic operation this was at the troll farm. The Internet Research Agency had people with different roles. They had a workflow, essentially a process by which they would issue themes and then develop messages and content, and they had specific goals and would shift in unison. And this matched what I had observed from the outside. So it was good to see, behind the curtain, how they were trying to operate the system.

Ads In The News Feed

I’m wondering if you could just, for a layperson, describe the difference between ads and organic content and how they feed one another.

From a Russian influence perspective, there’s organic content and there’s ads. Ads are content they created where they wanted to either promote a theme or a message or an event, and they were trying to drive people toward that content. Organic is audience content, which is essentially discussions that are happening in the audience already that they just want to circulate and repopulate back into the audience. That creates that confirmation bias, which is: the more you see something, the more you’re likely to believe it. And if you can take it from somebody that’s actually in the audience and send it back into the audience space, it’s that much more effective and much cheaper to do.

And so you think that the organic content was actually more important than the ads were?

Organic content is always going to be more effective because it actually comes from the users and you know what’s successful. You can take content that you already see moving wildly, you know, virally around the audience space, grab it, and repurpose it. You don’t have to product test it. The tough part with ads is you have to make sure they connect with the audience. And if you’re in Russia’s position, you can’t really product test something on an American audience. You just have to put it out there and hope that it works.

So for example, [with] the Blacktivist account, the goal was to engage with an audience that was worried about Black Lives Matter or black issues in America. And to do that, you place an ad out there. When someone clicks on it, it can start a contagion effect. When they click on it, they pass it into their audience space and it looks like organic content from the audience. People don’t assess the source [of] the actual ad; they assess the person who delivered it. That’s the source of the information. That thing can move and migrate virally around an audience, so then no one really knows what the true source is. The Blacktivist account is the true source.

All they know is they like the person who sent the content to them and they like the content that was shared. This is how RT, for example, can have a giant YouTube presence: because people don’t see it as Russia Today. It’s disseminated through people’s likes and shares. It looks organic even though the source [is] Russian state-sponsored propaganda on the other end.

So at least during 2016, there was no way to differentiate between something that was an ad or organic content that a user had created?

In 2016, people didn’t necessarily know whether something was an ad or organic content, and you didn’t know what the source of the ad was, meaning you didn’t have to declare yourself as a certain entity. There wasn’t necessarily a way to track it back; you couldn’t necessarily follow it all the way [back].

Facebook’s made changes in how they post ads now. They’re trying to verify the identity of those that post ads, so they made some changes. But in 2016, you could dupe the system so that you could inject content into an audience without the audience really realizing where it was coming from.

And you inject content into an audience at that point by sponsoring an ad?

Sponsoring ads is one way to do it. And the other thing is you can create a group, essentially, that brings ads out into that audience space, and once you enlist people in the group, you can then re-advertise to them. It’s a way to hone your advertisements so it’s engaging with those people most likely to click on it.

And so back in 2016 with the Blacktivist example, that was paid for in order to appear at the top of someone’s News Feed? Explain what that would have looked like to the user.

Right. So advertisements on Facebook can be designed to go after certain audience segments based on demographics or some sort of social or economic status or whatever they know about that user. So you can actually create an ad and then decide where you want it placed – maybe geographically if you want to put it in a certain state or region. Or you could even hit certain age groups. It allows you to go after an audience in a very nimble way that you think might engage with that content.

And it shows up how then?

It will show up on your Facebook feed as an advertisement but you don’t necessarily know where it’s coming from. Or it can be placed into a group, meaning the advertisement could be placed there, you could then take another fake account or a real account, click on it, and inject it into a certain audience space. It’s a way to push content virally throughout the ecosystem.

And it may be more difficult now, but if we put ourselves back a couple of years, how easy was it to create a fake Facebook account?

Going back to 2016 and earlier, it was fairly simple to create a false identity or a false account online. You didn’t necessarily have to verify your identity. You didn’t have to, especially if you were doing advertisements, have a real, functioning entity behind it that proved itself. You could just create a false identity, maybe using an internet-based phone number you could register online. So you could spoof this very easily. Now they’ve put in a lot of controls to verify identity, so it’s much harder to do, and it’s not nearly as easy to run advertisements. And you also see them saying that political or social ads need to be declared; you have to say who is sponsoring them. Those changes are going to make it much harder for this sort of influence to happen.

Isn’t there an irony [in] the idea that Mark Zuckerberg set out to map all social relationships and have a social graph of who our friends are, knowing full well that we all influence one another and that our friends and family are the most influential? I mean, has this whole experiment of connecting all of us just basically backfired to some degree?

Whether it’s Facebook or even the internet before it, we always look to the more optimistic version when we start out, which is everybody in the world can be connected to everybody else in the world. But we never ask what happens when the worst people in the world are connected to each other. That might be terrorists or hackers or it could be a nation that, you know, is based on authoritarianism.

So we never really anticipate the downside of everybody being connected together, which is that the internet brought all of us together, but social media can be used to divide us. So things like democracy, rather than benefiting from social capital that bridges us together, get what we call bonding within our own verticals, which means we reinforce our own beliefs and start to fight against each other. I don’t think anyone really anticipated that, but in a preference world, why engage with somebody who’s saying something you don’t want to hear or don’t like? You’ll always choose what you want to hear and what you do like, and this creates a very divided world rather than a unified one, despite the fact that we’re all online together.

And you actually think that it’s creating or exacerbating that? Divisions in our society?

There are always divisions in our society, but the online world actually exacerbates them because you don’t have physical conversations. You can’t really move across the divide. You or I as Americans might come and talk to each other and quickly realize we have more in common than differences, but online all we know is what our differences are, and we may not realize what we have in common. It’s much different in terms of how we filter the world, and so it helps divide us rather than bring us together.

Do we want just an explanation [of] why you saw the seeds of the Russian efforts in the Arab Spring? You said that it’s a theory of yours.

Oh yeah, yeah. Probably [Russian General Valery] Gerasimov. He talked about …

Oh, OK. The Gerasimov doctrine. OK.

Yeah, yeah. The Arab Spring was groundbreaking for many reasons. We thought, you know, it’s a way for people who are being suppressed to reach out to the international community and topple dictators. But it gave Russia a different perception, which was: this is a way to use asymmetric advantages. General Gerasimov talked about this, others in Russia talked about this: How can we use information as a weapon to really change people’s perceptions? How do we mobilize people online whom we usually couldn’t reach but now can with a click? And so they were able to take that idea and use it to essentially break unions apart, to align with audiences that think like them about nationalism or anti-immigration, anti-NATO, anti-EU. So they saw it as a weapon and an opportunity, not for democracy, but for authoritarianism.

And for sowing division.

Yes. If you want to reach into the heart of your adversary, there’s no greater tool for this than social media.

originally posted on pbs.org