The Facebook Dilemma | Interview Of Sam Woolley: Computational Propaganda Project

Sam Woolley is the director of research for the Computational Propaganda Project. This is the transcript of an interview with Frontline’s James Jacoby conducted on March 1, 2018. It has been edited in parts for clarity and length.

So let’s start at the beginning. How did you get involved in this work? What was the impetus?

Sure. So in 2013, I joined the Ph.D. cohort at the University of Washington in the Department of Communication to study … generally speaking, I was interested in media literacy and critical thinking online. And so I was worried that, you know, there was a lot of content flowing online that maybe wasn’t of the quality we would hope.

And at the same time, I had previously worked as an organizing fellow on the Obama campaign. So I had had an interest in technology and an interest in how technology facilitated political communication and the ability of campaigns to win or lose an election, it seemed like. And one of the things that we noticed during the Obama campaign, or that I’d noticed during the Obama campaign but also during the Arab Spring and Occupy Wall Street and other events that were going on, was that there was a lot of noise on social media across Facebook and Twitter, and that there were also a lot of suspect accounts.

And I didn’t know a lot about what those accounts were. I thought that maybe it was just people creating fake accounts to sell stuff. But as I looked more into it I realized that there was some kind of political manipulation going on and I hadn’t done any tracking. There was no sophisticated research by me at that time. But I went to my advisor Phil Howard who is now at the University of Oxford. … So I go to Phil, I say, “I noticed you’ve written a couple articles. I didn’t come to the University of Washington to work with you.” But I was in a class with Phil at the time. “But I’m really interested in studying this bot thing. What do you think?” And Phil said sure, let’s talk about what that looks like.

How Political Manipulation Works

What is the focus of the study? What was the focus of the grant?

The grant was called computational propaganda. So it was a term that Phil and I came up with to describe what we were seeing. And we thought we’re going to look at the way that automation and algorithms get used or manipulated to try to sway public opinion.

And this was something that hadn’t been studied before?

It hadn’t been studied by social scientists. There’d been some research in 2010 and 2012 by computer scientists who had studied what they called online astroturfing. So computer scientists had begun to be aware that there was noise, and everyone had known that bots played an integral role on Twitter and Facebook. But they didn’t quite know how they were used in manipulation. And so we came at it in the social sciences and really decided that we were going to look at how bots affected public opinion; how algorithms like the trending algorithm on Twitter or the News Feed on Facebook were manipulated from the back side of things.

And back side means what? Are you studying how people are deliberately manipulating?

Yes. So in the beginning our focus was on how powerful political actors – so politicians, governments, militaries – were using social media to purposefully manipulate public opinion in attempts to change the vote, in attempts to quash a story or prevent activists from organizing using social media.

These are pretty scary scenarios.

Right. And at the time, we had a few examples. So we had been following what had happened in Mexico the year prior, in 2012, and we’d seen massive usage of social media, both in attempts by the mainstream parties to silence political opposition and in attempts by the activists to smear the powers that be. And so we were like, wow, this is kind of crazy in some ways. Not just bots but also just usage of social media is amplifying the ability of groups to sort of take over the conversation or to attack one another, to harass journalists. And so very quickly we realized we were onto something, and that our assumption that this was going to happen in other countries was actually bearing out.

Phil kind of left me to my own devices in a lot of ways. I was manager of the project from its inception. And at the time, Phil was writing a book called “Pax Technica” on the internet of things, so it was sort of related. But I was out there interviewing people. So the goal of a lot of this research … I should mention from the very beginning, my interest was to talk to the people who made and built the technology that got used for manipulation. So I wanted to talk to political bot makers. I wanted to talk to people who were figuring out ways to manipulate trending algorithms by using numbers and by sort of just hacking or gaming the system.

And it took a long time to get in contact with people, like months and months. But once we talked to one or two hackers or one or two people that built bots there was a snowball effect. So we started getting more and more and more people wanting to talk to us. And we started realizing, wow, this is a thing and this has been used. Computational propaganda – which we define as the use of automation and algorithms in the manipulation of public opinion – is a thing in multiple countries around the world. And governments, parties and a variety of other actors are actually using this stuff to manipulate public opinion actively and we should be concerned.

And this is at a time when social media companies like Facebook in some cases are the primary source of information?

Yeah. I mean, we know that the penetration rate of sites like Facebook is extremely high in America. We also know that a lot of people get their news on Facebook. Pew [Research] studies have shown us that a high percentage of Americans get news on Facebook. So the other thing that we knew was that teenagers develop their political identity online. So these young people, who are quite impressionable, are getting a lot of resources for how they understand American politics through the internet. There are also lots of older people online, over the age of 60, who are not native technology users, not native internet users, who don’t really understand what fake content looks like.

… The concern wasn’t necessarily that this was changing how people voted. The concern was that this was changing public opinion. … And the other thing that we were worried about was that this computational propaganda, this digital disinformation, was confusing people about how politics occurred, what had actually happened. And it wasn’t just in the United States. It was a global problem. We saw it in multiple countries.

Presumably if you’re recognizing this as a problem going back to 2013, was this on the radar screen of a company like Facebook as a problem?

The simple answer is no. The social media companies were very wary of any accusations that their platforms were being leveraged for political harm or for manipulation. We reached out many times to the companies over the course of our work and we very rarely got a response. And when we did get a response, it was sort of like: OK, but I think this is overblown. My perspective is that there were several things at play here. There was a wide-eyed optimism about the beneficial uses of technology, which was born out of the cyberlibertarian idea that the internet was going to be an open and free space where people could talk about politics, could build their own news, could do all of these different things. There was many-to-many communication.

But what wasn’t considered is what we academics call political normalization. And that means that the powers that be – governments, militaries – will eventually figure out a way to use social media to control. Because that’s what a lot of times governance is about. It depends on the system you’re in. But across the board, we saw democracies, we saw authoritarian countries, we saw emerging democracies, and a number of countries in between making use of social media as a means to control.

The other thing with Facebook is that they had scaled so fast. Right? So when they started out, they didn’t really know what their monetization scheme was. They didn’t really know where they would end up. And it went from a dorm room to suddenly having millions and millions of users. It went from having just college participants to suddenly having open participation across the world. And where we are now, they’re sitting at, you know, just under 2 billion users. And I think that they didn’t realize that there were ways that their site, their platform, could be tampered with to control the way that people think, to confuse people and to change public opinion.

Deliberately.

Deliberately. The thing is not just Facebook but all of the social media companies, all of the usual suspects, the big ones, discounted the political use of their platforms. There’s been this perception that continues to this day that the social media companies are technology companies; that they aren’t media companies. And you’ll hear this said by everyone from the top to the bottom at these companies. And [Facebook founder and CEO Mark] Zuckerberg and others have said, “We don’t want to be the arbiters of truth.”

There is this kind of catchline where they say, “We don’t want to moderate content.” I think that it’s really important to note that since the beginning, and especially now, all of these companies, Facebook included, moderate content. They curate content. What you see in the News Feed is curated via an algorithm and other inputs and outputs. So there is responsibility upon the shoulders of Facebook and of other social media companies for what appears on their platform.

And the idea that they’re not mediating content, that they’re not prioritizing stuff, is insane. There are trends on the side of Facebook. There are trends on the side of Twitter. Trends are things that are amalgamated based upon the number of views, clicks, likes. “This thing is popular” is what the site is telling you. But what happens when you have one person using 10,000 bots to game that system so that they create a trend? And what happens when that trend is created around a political situation? That’s worrying.

Facebook’s Reaction To Warnings

And so I mean, going back to 2013, and I’m just kind of curious about the initial days of your research and the responses from companies like Facebook because I mean, you said that they were kind of wary. Right?

In the very beginning, we were a nonentity. It took three years for people to start paying attention to the research. We put out a ton of different research on a ton of different country cases, looking at social media from all different sorts of angles. And no one was really paying attention.

My collaborator went and gave talks at social media companies. I was in correspondence with people that were working at them. And we were very public with our work. That’s the other thing: direct communication with the social media companies was really challenging because no one really wanted to listen to us, and a lot of people in the academy actually didn’t really want to listen to us either. Aside from that, there was also, you know, the fact that we were writing magazine articles.

We were writing for organizations like Wired and Slate and Motherboard. And they were some of our early supporters that let us in and sort of say to the public, “Hey, here’s what our research says and here’s why you should be concerned. There is large-scale manipulation of political speech and of public life and of society going on via social media.” But even those things didn’t get that much traction. In fact, something that’s really interesting is that a lot of the time when the social media companies did respond to us, they responded with takedowns. They responded with attempts to discredit our research. And that happened more a little bit later on. But in the beginning, they attacked our methodology. The assumption was that the way that we identified bots was wrong. And perhaps some of those criticisms are correct, you know.

OK, so there was an assumption that similar methodologies in our research were flawed. And that assumption is probably correct in some ways because we were starting out and we didn’t really know, like, the best way to capture the data at the time but we definitely knew that automation and the use of computational propaganda more broadly was a problem. And we actually said in multiple, different meetings that I had where the social media companies were at the table, I said, “Look, if you think our methodology is flawed then come work with us on creating a better methodology because we want to collect this stuff. We know it’s a problem.” That, needless to say, never happened.

And, you know, for instance with Facebook, were they ever willing to share their data with you?

So none of the social media companies have ever been willing to share data with us. They’ve sort of mentioned it in the last six months as a possibility, but it’s become very clear to us that it isn’t really happening.

So there’s been an open question and an open call from academics and others in civil society to get the companies to share data on specific events so that we can do analysis. At the present, there is not a really easy way of accessing data from any of the sites. On Twitter, it’s a pay-for-play system. On Facebook, it doesn’t happen. On YouTube, it doesn’t happen. And so in the beginning, when we first started doing this research, we went where we could get data and that was Twitter. But we were also really interested in getting data from Facebook and from YouTube and from these other sites. And our research was out there. It was public, we were presenting on it. We were giving talks on it.

And never, until really 2016, did the question of getting data from these companies really arise. They never shared data with us. There was never a move to say, “Your research is interesting. We’re concerned. We think that you should work with us on this problem.”

Let me ask you this. Who would be the team at, say, Facebook that should be interested in your research?

So in 2013, it was really unclear who the team at Facebook would be that we would have worked with, or who the team at other social media firms would have been. … It became clearer around 2016, during the election, that we maybe could work with the security team, or that we could work with the user experience research team that studies how people actually use the platform. But it was still sort of fragmented. And again, you have to remember that these companies scaled so fast that oftentimes my impression was that the right hand didn’t know what the left was doing. There was simultaneous work going on in multiple domains. And so it wasn’t clear to us who we would have collaborated with, and it wasn’t even clear to them in some ways.

… More recently, there’s been discussion of sharing data and of the ways that the social media companies could play ball with third-party researchers or with civil society in sussing out this information and sharing data, broadly speaking. But it’s become really clear that they are not willing to share streaming data. And that means that as an event is going on, the companies aren’t going to share data with us [in] real time. That’s problematic. They can share data with us after the fact, but how do we know that it’s not scrubbed? How do we know what they’ve done to it?

There’s got to be more sort of culpability here. And another point: Research that’s been done with Facebook in the past, where they’ve collaborated with external researchers at other institutions, has been on a very piecemeal basis. So there’s no real standing set of rules for who gets to do it and when. I was actually at a meeting with Facebook and some other people where one of the researchers said very candidly, “If you want to do research with Facebook, you need to know someone at Facebook and they need to sponsor your work.” It’s this open secret, maybe not even a secret, that if you want, as an academic, to get data from Facebook or to do work with Facebook, you actually need to know them.

And so I mean, what’s that all about? You know, how is anyone that doesn’t have the deep contacts there going to do work? And if they do have the deep contacts at Facebook, would you trust their work?

If you had access to Facebook’s data, how would that help you understand what you do? I mean what would that help you do?

So one point of qualification here. I am a qualitative researcher, so the work that I do is focused on the people that use and build these technologies. I don’t crunch numbers. I am someone that sits down and actually does what we’re doing. I interview people. I collect data through content analysis. I read lots of articles. I do what one academic, Clifford Geertz, called “deep hanging out.” So I spend time with these folks in order to get a deep understanding of what’s going on.

… We’re attempting to not go into our research with presuppositions about the fact that manipulation is occurring. In fact, like most of the time when we go into these situations, maybe we’re going to spend a week analyzing this data and nothing’s going to turn up and that’s OK. In fact, because we are concerned about political speech and concerned about public life, we hope that it doesn’t happen. But when it does we have a vested interest in letting people know.

If we had access to the data on Facebook, we’d be able to see who was using the platform, how they were using it, what they were talking about, and whether or not computational propaganda was going on on the platform. What I mean by that is: Were different groups manipulating public opinion? How are they doing that? Were there suspect profiles being used to drive conversation? Were group pages being manipulated? Was there a clear indication of automated content based upon the time stamps of when it was posted? Like were things being posted over and over and over again? Were different groups being pitted against each other in a way that looked artificial?

I mean, you name it. If we had the data we could really begin to figure things out. But the thing is, it’s really hard for me to talk about, like, what we can do if we had the data because we’ve never been given access to the data. So like, data is just information on what’s going on during these events and it could tell us any number of things. But that data’s not public. It is black-boxed most of the time.
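One of the signals mentioned above, posting cadence recovered from timestamps, can be illustrated with a minimal Python sketch. This is purely illustrative and not the project’s actual method; the post records, field layout and the one-post-per-minute threshold are invented assumptions for the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical posts as (account_id, ISO-8601 timestamp) pairs. Real data
# would have to come from the platform itself, which is exactly the access
# problem described in the interview.
posts = [
    ("acct_1", "2016-10-01T12:00:05"),
    ("acct_1", "2016-10-01T12:00:35"),
    ("acct_1", "2016-10-01T12:01:05"),
    ("acct_2", "2016-10-01T09:14:00"),
    ("acct_2", "2016-10-01T17:42:00"),
]

def flag_suspected_automation(posts, max_posts_per_minute=1.0):
    """Flag accounts whose sustained posting rate exceeds a threshold.
    The threshold is an arbitrary illustration, not a validated cutoff;
    serious bot detection combines many signals beyond timing."""
    stamps_by_account = defaultdict(list)
    for account, ts in posts:
        stamps_by_account[account].append(datetime.fromisoformat(ts))
    flagged = []
    for account, stamps in stamps_by_account.items():
        if len(stamps) < 2:
            continue
        stamps.sort()
        minutes = (stamps[-1] - stamps[0]).total_seconds() / 60
        rate = (len(stamps) - 1) / max(minutes, 1e-9)
        if rate > max_posts_per_minute:
            flagged.append(account)
    return flagged

print(flag_suspected_automation(posts))  # -> ['acct_1'] with this toy data
```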

When you were doing your initial research, it would seem that Facebook would have no incentive to want to know that they were being gamed. Right? I mean if that’s what the data was going to show them, it would establish some sort of knowledge that their platform was being gamed by people who sought to manipulate things politically.

Right. And the problem with any company admitting that they’ve been gamed or been hacked or been misused is that that gets to real money very quickly. Advertisers do not like it when there’s automated content on a platform. They don’t like it when the platform is being used for racism or for political attacks. And so one of the reasons I suspect that the social media companies didn’t want to collaborate with us is that they wanted to deal with this thing internally.

They wanted to figure out what the bugs were and to fix them. But part of the problem was that they had approached the building of this tool with the conception that the best ideas would rise to the top – that they had built these algorithms that prioritized great information. But I think that they didn’t take enough time to think about the fact that the tools could be misused almost using the same exact avenues.

Another thing is that there’s long been an ethos at these companies of “Move Fast and Break Things.” That’s said at Facebook all the time. People that know the work there say that. So there’s a perception that you should design, build and launch. And then research. And that’s backwards. I’m here to tell you, we need more slow tech. We need more people researching what the social effects might be of a tool, talking to the people who are going to use the tool, and then launching it. As a Facebook user, I can tell you that Facebook has changed rapidly over the course of the last seven or so years. And suddenly you’ll get on and things will just be completely different.

No one ever talks to the users really about what that looks like. Verifiably, like Facebook had very few UX researchers five years ago. Now they have over 300, but then they had under 10. And so that suggests to me that there was a problem of hubris and of expediency and of scale. So the goal from the beginning was: Let’s scale. Let’s get as many people using this platform as possible and let’s figure out how to make a bunch of money.

I don’t mean to harp on this but it is astonishing to me that you’re out there investigating and finding people who are manipulating the platform for political ends and you’re really receiving a cold shoulder from the platforms themselves.

Right. It wasn’t like we were researching this from our bedrooms or our office at home. We were doing this research at the University of Washington and then Oxford University. And so we had funding from the National Science Foundation. We had funding from the European Research Council. We had funding from major foundations. The idea was that we knew something was going on. We had proved that something was going on to get the funding from governments to study this problem.

But we weren’t being paid attention to in the early days. And I understand that one of our jobs, as the team who coined the term computational propaganda and came up with the term political bot, was actually proof of concept. We had to prove that there was something going on here in society.

Computer scientists had done great work to show that automation played a role online in pushing political speech. They’d shown that spam was a problem in and around politics online and on social media like Facebook. But what we set out to do was to show that there was a large-scale social problem. This was a global issue: from country to country we were seeing this, and we could say in a comparative context, this instance looks a lot like this instance, and the manipulation is becoming more and more of a problem.

How Political Manipulation Works

And just describe kind of the very essence of what it was that you were discovering to be true.

At the heart of what we were seeing was that a variety of different groups were making use of bots – automated programs that get used to do things online that a human would otherwise have to do – to mimic people. So using automation to mimic real people to amplify their positions online. So to make it look as if they were much more popular than they were. What this meant was that oftentimes fringe groups that were five or 10 people could make it look like they were 10,000 or 20,000 people on websites. They could also automate their communication. So even if they were just using one profile, they could make it so that they were able to message really, really quickly, or send friend requests really, really quickly. And so they could actually build a movement of real people in an astroturf way.

So we were seeing two kinds of manipulation of communication. One was on the front end of the sites. And what I mean by that was that it was occurring on Facebook pages, on Twitter pages, on YouTube. And it was sharing of articles that were bogus. So they had complete misinformation in them. They were aimed at smearing politicians and political campaigns. They were harassing; they were very odd.

The other was that we saw a massive bolstering of people’s presence on social media. So people had a lot of fake followers that made them look more popular than they actually were. They gave the illusion of popularity, what I called manufacturing consensus. So kind of projecting you onto the stage when in fact no one actually cared about what you did. And the bandwagon effect that we saw during this time was a real thing. So we saw people that weren’t really viable political candidates, or ideas that had not been in the public discussion, suddenly begin to get viability because of automated bolstering and also because of these sort of astroturf movements.

There was also manipulation going on on the back end of the websites. So there’s a misconception that bots only get used to manipulate you and me and our friends on the front end of websites, by sharing articles or fake news or by bolstering people’s likes. That’s wrong. Bots also get used to converse with the algorithms of these sites. So what an algorithm does is, it’s just an “if this, then that” procedural piece of math. And it says if the numbers are this high, then this goes to a trend. It’s much more complicated than that, I’m sure, in many circumstances.

But if you use bots and fake traffic to drive up an issue, then oftentimes the algorithm will prioritize that higher than it would if there was no traffic around it. So the idea here is that computational propaganda gave the illusion of popularity to politicians and political events around the world. The other thing that happened was that the propaganda we were seeing on social media wasn’t just used to amplify people’s positions or amplify politics. It was also used to suppress politics.

So imagine what happens when you’re able to buy an army of 10,000 bots, or to hire tons of people, and to suddenly unleash them onto the public and attack journalists and attack women. Or to spam a hashtag so that activists who are using that hashtag can’t use it to coordinate. Or to invade a group page with racial slurs. All of these things were things that we were seeing in Turkey, Ecuador, South Korea, the United States, the U.K., throughout South America. And we thought, wow, OK, these social media systems have great democratic uses. They can be really useful in allowing activists to organize, in allowing protest. But they’re also very readily gamed by people that want to use them for control.
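As a toy illustration of the trend-gaming dynamic described in this answer, here is a short Python sketch of a purely count-based “trending” rule and how a single operator’s automated accounts can dominate it. The account names, numbers and ranking rule are invented for illustration; real platform algorithms are far more complex.

```python
from collections import Counter

# Toy interaction stream: 40 genuine accounts mention a local story, while
# one hypothetical operator drives 500 automated accounts to push one tag.
organic = [(f"user_{i}", "#localnews") for i in range(40)]
botnet = [(f"bot_{i}", "#fringe_claim") for i in range(500)]

def naive_trend_ranking(events):
    """Rank hashtags purely by raw interaction volume, the simple
    'popularity' signal the interview describes being gamed."""
    return Counter(tag for _, tag in events).most_common()

print(naive_trend_ranking(organic + botnet))
# [('#fringe_claim', 500), ('#localnews', 40)]
# One operator's automation, not genuine interest, sets the "trend."
```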

I want to do some of the case studies and go through some of the specifics. But in a general sense, you were going out and actually talking to the people that were gaming the system.

Yeah, that was the goal. So in the beginning I was talking to all sorts of different kinds of bot makers. So I was talking to bot makers who were building bots for positive social uses – so people who were using bots to critique or to make comedy or all sorts of things like this. And what happened is that I realized that in some ways the bot-making community was quite small, especially in the States and Britain. And they were able to introduce me to other folks or just to point me in the right direction to say, “Oh, you should check out this company that builds social bots on behalf of users that seems to be a bit opaque in how it operates.” Or they’d say to me, “You should look into advertising online and the ways that advertisers make their clients look more popular than they actually are.”

And one of the things that was really clear to me was that there were a lot of veiled promises online about how social media marketing happened. … What I realized very quickly was that advertisers are using bots. They’re using all of these different tools on social media to make their clients look more popular, and this has been translated into politics. And this has been translated into our social lives. So what we saw there was a transition from basically spam on behalf of products to spam and noise and manipulation on behalf of politicians and political ideas.

And also falsehoods.

Absolutely. So anyone who’s ever received a spam email knows that spam emails are laden with falsehoods. But we have a really clear mechanism for getting rid of those things in our email inboxes. We do not have a clear mechanism for getting rid of those things on social media. So a lot of the messages that we were seeing being spread over social media about politics that we had identified as pretty clear propaganda … were riddled with falsehoods.

And when you went out and spoke to people that were actually making the political propaganda and propagating it online, were these people describing the ease with which it was done or how difficult it was to spread things in this ecosystem?

Yeah. A lot of people sort of laughed about how easy it was for them to manipulate social media. And obviously, this was the purview of people who had computer coding knowledge or who were in some way, shape or form internet natives that had figured out how to use different tools to amplify themselves or to amplify their ideas or to just game the system. Because it wasn’t just bots that were being used. It was also human armies. It was also just one person that figured out how to get around the system.

I think back to one interview with a person I talked to who said to me, “We aren’t going to let the media set our agenda. We’re going to use social media to set the agenda of the media. So we’re going to make things look so popular, we’re going to drive traffic around this issue so much, that the media can’t resist covering it.” And so the idea from these folks was that just getting covered was a win, because previously their ideas hadn’t even been in the mainstream. There’s this example of when Pepe the Frog was designated a hate symbol by the Anti-Defamation League. And the wide consensus amongst the “alt-right” was that that was a win, because previously your mom and dad, my mom and dad, and everyone else didn’t know what the heck Pepe the Frog was. But suddenly he had launched onto the national consciousness.

So what you’re starting to see here is that what the bot builders have quickly realized was that social media was a main source of information for reporters. It was a main source of information for politicians and that if they could manipulate social media, they being the bot makers, then they could get politicians and journalists to regurgitate what they were saying and doing.

It’s crazy. Was that person an American?

… So that person was an American who had worked for various political campaigns in the United States, and they were really upfront with me about the fact that they treated digital marketing for politics like the Wild West. They used the term Wild West multiple different times. They said we can get away with anything online. They made it really clear to me that the FEC, the Federal Election Commission, decided in the early ’90s that it wasn’t going to moderate speech online during elections. And so why wouldn’t political marketers take advantage of the vast array of resources that they had at their fingertips to try to get people to vote the way they wanted them to? Why wouldn’t they try to push the conversation?

Actually, another fellow who I talked to, who is really well-placed in one of the American political parties, told me that he was well aware that political campaigns in the United States had been using bots since 2010 to bolster the popularity of their candidates. He told me that he didn’t think it was a big deal. He said, “I don’t think it really causes much harm. I don’t really think that it’s … I think it’s a nonissue.”

What he didn’t understand though and I tried to make this point to him was, “But hey, you’re making these people look way more popular than they actually are. And that has a very real effect. The research shows that when people look more popular than they are online, that real humans gravitate to them very quickly and they start listening to them.”

Weaponization Of Information

I mean, there’s [all] sorts of ways to parse this. But obviously, one of the most relevant things today is this issue of weaponization. Right? The weaponization of information and either authoritarian regimes or regimes generally using information and propaganda campaigns to sow discord in democracies, to obfuscate the truth and things like that. When do you kind of trace the beginning of using social media as a tool, as a weapon?

I think that you can trace the use of social media as a weapon back to the early days of Facebook and Twitter and YouTube. There was a small, select group of people who realized that you could leverage these media systems to attack. And that scaled over time. The … attempts to weaponize social media have their roots in what we saw during the Cold War and what we saw before. It’s just that they were scaled.

So you could use automation to scale your efforts to weaponize the platform. You could use anonymity to hide your tracks. And those two things as much as anything allowed for more successful weaponization of social media, despite the fact that Facebook now has a real name policy. There were plenty of people who were figuring out ways to game that system. There were bot makers I was talking to that said, I can run up to 10 accounts on Facebook at any given time and use them to manipulate people. Or, I can build group pages on Facebook and then recruit real people to them and then set them against each other.

The idea was that social media, like other kinds of media, were useful in spreading ideas, telling people what to think about. But almost anyone could tell people what to think about with social media. Whereas before with TV or with radio, we had to worry about potentially the government having a role in manipulation in authoritarian countries. But now what happened with social media was that it was anyone’s game. That your next-door neighbor could be using Twitter to game public opinion.

OK, let’s talk about the case studies a little bit. I mean, 2012, there’s Mexico. You guys didn’t produce a report on Mexico. Right?

No, but we did study it. Actually, we might have produced a report. I wasn’t one of the authors but I think our project might have produced it.

I’d love to know, because it seems like kind of a democracy-threatening example. So yeah, bring me through what happened in Mexico.

Two of our earliest focuses were on Venezuela and Mexico. So we wanted to look at Latin American countries and how the populace in these countries was being manipulated using social media. We had heard a lot from Mexico. We knew that there had been a recent election there when we started our project, and we had heard from a lot of people that there was widespread manipulation going on over social media.

And so what we started to do was reach out to experts and talk to people that we knew that studied in Mexico and that studied social media in Mexico. And we realized that there was something really suspect going on with social media during the 2012 election. … Not only were social media being used to game public opinion to spread positive messages about the government or about [a] particular political party, but they were also being used to attack activists, to attack various different groups or to silence them.

And so Mexico was an early case of very sophisticated computational propaganda and has continued to be a place where propaganda has been spread online with quite a large degree of success. And the result oftentimes isn’t necessarily to change the way that people have voted, but to scare people, to get them to leave the online space (and a lot of those people oftentimes are journalists), or to silence people and not give them a space to speak. The Mexican use of computational propaganda in and around politics there has been largely focused on weaponization. It’s been largely focused on using social media as tools to target specific communities.

Social media are purpose-built to allow the people that use advertising on those platforms to reach out to the communities they want to reach out to with a high degree of specificity. So my mom’s a real estate agent. She can use social media to find the people in San Diego that she wants to reach out to. Politicians and political parties can also use social media, actually even through legitimate means, to reach out to the right people, but so can people that want to manipulate.

Well, I mean, isn’t advertising manipulation? How do you distinguish between what’s advertising and what’s propaganda?

That’s a really good question. So oftentimes people ask me what propaganda is, or what’s the difference between ads and propaganda. And I think the clear distinction is that propaganda occurs during pivotal electoral events, during security crises. And the intent of propaganda is to manipulate public opinion in a way that undermines democracy. Advertisements are mostly trying to get you to buy stuff. The line oftentimes becomes blurry when it comes to political advertising, because political advertising is trying to get you to buy into a candidate.

And political advertising can be aboveboard. When political advertising is on TV and you see a commercial and it says bad things about Hillary Clinton or bad things about Donald Trump, you know it for what it is. It’s aboveboard. Like, OK, that was a little bit of a salacious commercial. But when you don’t know that you’re being manipulated, when it’s anonymous, when it’s automated, that’s worse. That’s a problem.

Disinformation And Misinformation

And what about lies? What about falsehoods? I mean the spreading of falsehoods to kind of undermine trust and understanding, especially when these media like Facebook become the primary source of information?

So one of the things we worried about early on was that because anyone could share information, it gave the illusion of democratic speech on social media. But in fact, there were a lot of powerful political groups that were using social media to circulate memes or to circulate articles that were patently false. And people previously didn’t really have access to that stuff. We know that people are given to listening to others that have similar political ideals to them. But we also know that people love conspiracy. And they love falsehoods.

And so it’s really worrisome that social media have been leveraged in a way that allows for the spread of disinformation, which means purposefully false information built with the intent to trick people. One area that we’ve seen a lot of falsehoods be spread around is climate change. So there is a lot of bogus information, a lot of junk science, a lot of bad statistics that go around online about climate change. That’s one thing. But there’s also a lot of bogus information, a lot of falsehoods that go around about elections now. Social media in some ways have allowed the use of lies and falsehoods to become an important part of politics worldwide.

And what case study of yours kind of points to that? Should we go to Ukraine?

Let’s go to Turkey.

Let’s go to Turkey.

One of the things we did early on was travel. So we knew that we wanted to go to different places where we saw computational propaganda happening. And I decided that Turkey was the nexus of a lot of propaganda online. We had [known] that [Recep Tayyip] Erdogan, the leader of Turkey, was making fairly open use of social media in attempts to spread his own line and also in attempts to spread falsehoods and bolster his party through automation. There were a lot of early articles actually about how Erdogan and his party in Turkey were openly using a bot army to circulate pro-Erdogan messaging.

And so I went to Turkey a few times and I actually spoke to a couple of people in Turkey. One was identified to me as the main guy building bots on behalf of the opposition. And the other was identified to me as the main person building bots on behalf of Erdogan’s party and Erdogan. And so I reached out to these people and I talked to them. And it became very clear at the time that the usage of propaganda online, the usage of bots and of spreading falsehoods, was seen as being pretty legitimate, like a legitimate use of social media and a way to try to control dialogue.

There was usage of social media in Turkey to attack the Kurds, to try to belittle a minority that we know has a long history of being persecuted, but also in an attempt to gain an upper hand in politics. And I think that the situation in Turkey has just become worse and worse. Not only did we see attempts to spread propaganda online and attempts to trick people online, but we also saw corresponding arrests offline of journalists who were attempting to use social media to speak out; of prominent opposition political party members who were attempting to communicate using social media. So Turkey showed us, as an early example, that there were sophisticated means of using social media to control the populace, to exert what increasingly looks like very authoritarian power.

And what about the use of social media to exert influence, for one country to use it as a weapon on another country?

Sure. So social media in and of itself is a transnational thing. So Facebook transcends the boundaries that we think of as bracketing the places we live. One of the earliest cases of transnational manipulation was between Russia and the Ukraine, and actually Russia and the rest of Europe, when the Malaysian airliner was shot down. So in the summer of 2014, an airliner was shot down in Ukrainian airspace, and there was a lot of mis- and disinformation spread about whether the Russians had shot it down, whether the Ukrainians had shot it down, or whether something else had happened. That was one of the pivotal events in using computational propaganda and using bots and using social media and using human armies in an attempt to manipulate the dialogue surrounding a particular political crisis.

And what ended up happening and what the Russian motives turned out to be in that was to sow confusion as much as anything. It wasn’t to deny responsibility. It was to create an alternative story about what happened; not just one alternate story in fact – 10 or 20 alternative stories about what had happened. And then also during that same time, lots of different prominent newspapers around the world were writing about the airliner being shot down. One of them was The Guardian. And at the time, in the comments section in The Guardian, the editor started seeing attacks.

Anytime that they wrote an article about Russia or about the airliner, they noticed that there was lots of bots and lots of people that were promoting pro-Kremlin lines that were attacking anyone that spoke out against Russia. And in fact, the editor wrote a letter to the readers saying that: We’re seeing a ton of propaganda going on below the line on our articles and this is really worrisome. …

Not only was Russia trying to control how people were speaking about this incident in the Ukraine or in Russia, they were trying to control this around the globe. They were trying to paint a picture very different from reality about what had happened. And they were also attempting to manipulate media organizations and people using [the] newspaper comment section to have discussions about this. So you can see that the role of social media in manipulating different publics was not bounded by a state or by a country. It was actually happening in real time from one country to the next.

… A lot of the time there’s very little clarity about who is promulgating the attack, like who’s behind it. And the Russians and other countries and militaries have used this to their advantage in a huge way, because what that means is that you can use social media to attack any variety of people in any variety of countries who are speaking out against what you want people to hear. And the problem is that you can do that anonymously. You can do it anonymously. No one’s ever going to be able to track you. You can use VPNs [virtual private networks], which are a means of hiding your actual IP [Internet Protocol] address, so they hide where you’re at. So if you’re in St. Petersburg and you’re targeting a bunch of people in England, you use a VPN and it says that you’re in Pensacola, Florida. And so suddenly, whoever is trying to track you and figure out what’s going on can’t.

Facebook And Ukraine

What it sounds like is that in Ukraine, the Russians had a good playbook. Right? They knew how to harness the power of this medium. To your knowledge, were the social media platforms like Facebook used? And were they aware of the problem that they were being used to basically spread lies and obfuscate the truth about what was a major crisis at the time?

There’s no way that the social media companies weren’t aware of the fact that their product was being used to manipulate public opinion or to manipulate people in places like the Ukraine. … If the social media companies were ignorant of the manipulation that was going on in Ukraine, and that actually continues today, then they weren’t doing their job. They were being lax, because they built a multinational product. This isn’t just an American product.

The problem was, and is, that it wasn’t until it was in their own backyard, until we saw manipulation during the U.S. election from Russia, that the social media companies actually started to show any kind of concern. And people like us, my research team and a number of other research teams, had been really clear by that point in 2016 that the companies and the platforms were being used for manipulation. And the response, generally speaking, was, “This is an isolated incident. And you know, we’re not going to deal with this now.”

The other thing that was a worry was that the social media companies had again and again and again taken the line of “We’re not the arbiters of truth. If our platforms are being misused, it’s not our fault and it’s not our problem.” I mean, I would argue that as companies that curate information, as companies that supplanted traditional media organizations and journalists in a huge way, that they absolutely had a role in managing the way that their platforms were used.

Facebook’s Reaction To Warnings

Who at Facebook, for instance, were you talking to and how high up did it go – this awareness that their platform was being used for political manipulation and obfuscating the truth during political crises like in Ukraine?

So for the first several years of my research project, I wasn’t able to talk to very many people at Facebook or Twitter. They weren’t really receptive to that. Also, I just didn’t have the gravitas to reach out to them. We were publishing openly. I was writing articles for major U.S. publications. But I had a lack of contact at Facebook. … By the time the U.S. election was in full swing and we knew that some fishy things were going on, our project was able to be in contact with people that were fairly high up at the companies. After the U.S. election and after Brexit, I found myself regularly in meetings with people from Facebook at all variety of levels, from quite high up to quite low level. And that’s because the media turned against the social media companies, because the public realized that they’d been manipulated, and because it became really clear that a foreign government had tried to interfere in our election.

I’m just trying to think of what other lessons [were] learned from Ukraine. Are there specific people that you want to talk about in your case study that you spoke to?

So Ukraine is probably the most advanced case of computational propaganda in the world. And what I mean by that is that the manipulation of social media in Ukraine is more sophisticated, is more widespread and more problematic in a lot of ways than it is anywhere else. Ukraine was used as a testing site for propaganda by Russia. And so that meant that a lot of their public forums, which Facebook is for them, were full of noise and spam and harassment.

One particular group in Ukraine has done amazing work around automation and around propaganda on social media. They’re called StopFake and they are a coalition of journalists, academics and others that have come together to say we need to stop the misuse of social media, especially for the purposes of Russian propaganda in our country. And a big part of their platform and a big part of their project has just been to generate awareness.

… Even in a country like Ukraine, lots of people didn’t know and still don’t know that social media were being manipulated. And the thing that’s important to recognize is that even if a person is not on social media, they can still be tangentially manipulated because their friends and family are. And so Ukraine was a case where questions about Crimea became very unclear because of the amount of propaganda that was circulating online.

And I had actually spent some time in Central Europe, Central Eastern Europe during the events we’re discussing, so when the airliner was shot down and just after, and when the Crimea crisis was going on. And I’d spoken to a number of people who identified themselves as sort of like anarchist hackers who had spent time in Ukraine. And actually, at first they were all really fearful of speaking to me because they were like: Is this guy a CIA agent? Why does he want to speak to us about how manipulation was occurring?

But once I started going deeper with them, they said the extent to which manipulation was happening in and around the Crimea crisis was insane. And a lot of it was bearing out online. People didn’t know what to believe. People didn’t know who to believe. And also, people were just really insecure and they felt unsafe because there was a lot of harassment that was targeted and political goings-on surrounding that event. And the thing about Ukraine is that this continues today. It’s really easy for us to forget that with our worry about Russian tampering in the U.S. election, places like Ukraine, places like Mexico, are on the front lines of this stuff; that they are facing way more sophisticated propaganda than we’ve seen in our country.

Facebook’s Responsibility

And it’s one thing to hold the people that are spreading the propaganda accountable. It’s another to hold accountable the vector for that propaganda, the forum for that propaganda, which is companies like Facebook. Right? Fine, you can say the Russians have gotten really good at this. But to what extent do you see an accountability issue for the places that have essentially become a public square?

Right. So Facebook is a tool. It is a technology. But it is also a means for communication. Without Facebook, groups that want to manipulate public opinion online wouldn’t be able to do it so effectively. Facebook provides a conduit for manipulation. Whether or not they want it to be that, it has become that. And so I think what’s really important to note is that we have laws in the United States, and other countries have laws around the globe, about how media use happens, about who can advertise on media, about what you can say on media. But there are really no such rules online unless you’re a traditional newspaper that’s gone online. So if you’re CNN or the Times or PBS, then you can’t say specific things in your articles online.

The crazy thing is that Facebook, because it amalgamates content from millions, now billions, of people around the world, has taken a stance that that’s not their problem. There is – this is a little bit wonkish – a part of the Communications Decency Act, [Section] 230. Section 230 was written in the mid-’90s with the intent to say that companies that launched sites on this new thing were not responsible for the content that appeared on their sites. So if users put bad things on your site, you weren’t going to be sued. And the other thing was that sites could take down that content without infringing upon free speech.

The perception that Facebook has created and that Twitter and YouTube have created is that Section 230 of the Communications Decency Act prevents them from being arbiters of truth, but in fact it allows them to stop misuse on their platforms.

230 is a law that allows companies to take down bad content. The point I’m trying to make here is that it’s a political choice by the companies to not take down content and to not moderate. I know that there are certain things that Facebook can get behind moderating. So if it’s terroristic content, that’s a no-brainer. But the question is: What is terroristic content, what does that look like, and how do you identify that? You are making a judgment call any time you take down content that you see as being terror-related, just like you’re making a judgment call any time you take down content that’s related to hate speech or misogyny.

So the companies are absolutely culpable for at least a portion of what goes on their sites. They won’t be able to escape regulation for much longer. It’s just that they’ve been so immensely profitable, so immensely exciting, and they’re something completely new.

Is there any indication that anyone in Ukraine contacted Facebook, spoke to Facebook about the fact that the site was a main conduit for misinformation?

People that I spoke to at StopFake, that main group in Ukraine that was attempting to combat Russian misinformation, were really open about the fact that they had tried to reach out to a number of different companies and groups about the problem of propaganda in their country. And it’s not just Ukraine. It’s also other countries. I spoke to people in Turkey that said that they had tried to reach out to the platforms. I’d spoken to people in Mexico that had said the same thing. The problem was there was no clear means of communicating with the companies. Like how were you able to get in contact with someone and actually make a complaint? Were you going to flag a comment and say that this was abusive? This was a systematic problem.

And so if you wanted to reach out to Facebook, you weren’t going to get on the phone with Mark Zuckerberg. You were probably going to get to someone that was doing content moderation. You actually weren’t getting on the phone at all. Someone in India or some other country, maybe the United States, was going to look at that comment and either delete it or not delete it. But there was no systematic approach to dealing with the problem of manipulation and propaganda on the platform.

In Ukraine and elsewhere globally, there were people reaching out to Facebook and other social media platforms saying there is a major problem with your platforms being polluted with misinformation and disinformation and propaganda?

Absolutely. So I mean, remember that the problem that we’re discussing didn’t begin in 2016. The problem that we’re discussing began far earlier. So manipulation was occurring on Facebook well before the U.S. presidential election, and multiple different countries had contested elections, or they’d had a security crisis, where social media had played a fundamental role in manipulating the public one way or the other. And there were people that were very vocal about how social media could prove to be a problem.

It may have been out in the open as an idea but you were gathering evidence that it was actually happening.

It may have been out in the open as an idea. But yes, we were gathering evidence that it was happening and we were gathering qualitative evidence. So we were talking to people. We interviewed over 80 different people who were building and making bots and had discussed how they had built them, how they’d launch them on Facebook, Twitter and other sites. And we were also doing a lot of quantitative analysis of how social media were being gamed.

I think that one thing that people identify as being an issue with our research is a cart-before-the-horse problem. Like people oftentimes come to me and they say, “A lot of your research has been [about] Twitter. Why?” And I say, “Because Twitter shares data.” If Facebook had ever shared data of any kind, we would have done tons of research on Facebook by now, but Facebook doesn’t share data. So criticizing us for not researching Facebook is crazy to me, because it’s like, [on] Twitter we get 1 percent of the API [application programming interface]. We can download random samples of tweets, and people at Twitter were actually more willing to talk to us than people at Facebook. It’s just that Facebook was completely closed off. Facebook did not share any information with users. Facebook has an API, but the common saying by the bot makers I spoke to is that the API on Facebook is always changing.
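As a rough illustration of the data-access asymmetry being described, the sketch below shows how researchers at the time could tap Twitter’s public sample stream (the roughly 1 percent feed) with the tweepy library. It assumes tweepy 3.x, valid developer credentials, and the old v1.1 statuses/sample endpoint, all of which have since changed or been retired; treat it as a historical illustration of the access model rather than current working code. Facebook offered no comparable research feed.

```python
import tweepy

# Placeholder credentials; real values came from a Twitter developer account.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

class SampleListener(tweepy.StreamListener):
    """Receives tweets from the public ~1% random sample stream."""

    def on_status(self, status):
        # In a real study these would be written to disk for later analysis.
        print(status.user.screen_name, status.created_at, status.text[:80])

    def on_error(self, status_code):
        # Returning False disconnects the stream on errors (e.g. rate limits).
        return False

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

stream = tweepy.Stream(auth=auth, listener=SampleListener())
stream.sample()  # v1.1 statuses/sample: a random slice of public tweets
```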

… I had a conversation with a guy in England who was a bot builder who told me that he had eight or 10 different accounts on Facebook and said that those accounts were used to basically manipulate public opinion in different ways; and that Facebook had reached out to him on multiple occasions about multiple, different accounts, saying that the accounts shouldn’t be personal pages. They shouldn’t be a page that looked like me or him, but they should be made into group pages or made into business pages. And so it was almost as if Facebook was directing him to use bots on their platform, just not to use them for the sake of building personal pages.

And so what he said to me was that, “Well, since Facebook told me that I could use automation on group pages or business pages, I went to group pages and business pages and started building bots there or using bots to run those pages.” And the crazy thing about this is that communication on Facebook looks way different than it does on Twitter. So on Facebook we follow our friends and family. And so propaganda that comes from random accounts on Facebook that message us is really not very successful.

Propaganda that occurs on group pages on Facebook has a much higher likelihood of being successful because on group pages on Facebook you actually interact with people you don’t know. And you actually interact about issues, you interact about social issues, you interact about politics on these group pages. And so what we realized, what I realized when I heard that story was, uh-oh, group pages and others like them on Facebook are where the manipulation actually happens. Not only can you use automation and not only is Facebook directing people that they can use automation on group pages, but group pages are the place where we interact with people we don’t know on Facebook.

… But tell me why it’s problematic that one guy in the U.K. can create eight different group pages. Why is that problematic? I mean, can’t anyone just do that?

So it’s not problematic that the one person in the U.K. can create multiple group pages. What’s problematic is when a person, a political party or a government builds group pages with the intent to either manipulate people and manipulate conversation, or to gather data on people.

How Political Manipulation Works

Just skeptically talking here, I mean, isn’t the whole point of politics to manipulate people? I mean isn’t that what political communication is all about? Why is this any different?

Yeah. So there’s millions of jokes out there about manipulative politicians and about why politics is manipulative, and [that’s] sort of foundational in the way that people think about politics. But a lot of the time what we see as manipulation is quite clearly presented through whatever medium of choice you’re using, whether it’s TV or the radio or a politician speaking to you.

The difference with what I’m speaking about, the difference with computational propaganda, is that computational propaganda is oftentimes automated, so you’re able to massively scale your ability to manipulate. And people don’t know that they’re being manipulated. It’s not like it’s subliminal advertising; it’s not like there are hidden messages or something. It’s that social media presents itself as being a legitimate space for conversation. And when that’s being manipulated, how can people ever know that what they’re trying to discuss, and where they’re trying to speak about [it], is not being affected by the powers that be?

You can see why people are paranoid. You can see why conspiracy theories are popular, because people know that the NSA [National Security Agency] had domestic wiretaps [warrantless surveillance program]. People know that the online space now is being manipulated by Russians. And so there is kind of a cascade effect. In some senses the distrust in the media, the distrust in the government, has been exacerbated by powerful political actors’ use of social media to manipulate people. It’s like this cyclical problem that keeps happening and happening.

… Here you are, you’re studying the use of computational propaganda. You’re seeing how one government could use it on the people of another country. Was this on the intelligence community’s radar in this country? Did you ever have conversations with anybody from the intelligence community from the CIA, NSA, FBI about what you were learning?

No. I hadn’t had conversations with the intelligence community. We had gathered some data that suggested that the intelligence community knew what was going on. There were early indications that the intelligence community in this country was using something called persona management software. Persona management software is basically a fake profile online that’s made to look like a real person with a presence online – multiple, different profiles on multiple, different social media sites. You use it as what’s called a honeypot: you use it to attract the people you want to catch.

So that persona that you’ve built operates on social media and can help to catch terrorists or it can help to catch criminals. … And so the point I’m trying to make is that the intelligence community was ahead of the curve in understanding that social media were just another forum for manipulation to occur and they were absolutely concerned with this.

Do you know whether there was any conversation between anyone in the intelligence community and the social media platforms like Facebook about the susceptibility of their systems?

No, I don’t. It wasn’t until recently that I became aware, like much of the American public, that there was a conversation going on between the intelligence community and the social media companies. But in some ways I suspect that there was not as open a communication as we would have hoped between those entities. It’s kind of a hard problem, though, because on the one hand, as citizens who value privacy, we don’t want the intelligence community to be interfering with social media. On the other hand, we don’t want to be manipulated by political actors, so we want the intelligence community to stop that manipulation. So it’s a challenge.

Technology And Democracy

I mean, your research was essentially raising an alarm that there is a threat to democracies all around the world of deliberate political manipulation, disinformation, propaganda campaigns. And it’s kind of surprising to me that no one either from the social media companies or the intelligence community comes to you and says, you know, “What are you learning? This could possibly be a threat to American democracy.”

Pretty much no one from the intelligence community or from the social media companies reached out to us prior to the U.S. election. We’d been doing the research for three, four years by the time this came up. In May of 2016, well before the election, we wrote an article in Wired that said social bots were threatening American democracy and threatening the election. And no one reached out to us then. We were publishing our papers openly. We were even reaching out to people in government to try to discuss this problem.

And it wasn’t until after the election that we started having meetings with people in government or with people at various levels of the intelligence apparatus. And to be really clear, my conversations with people in the U.S. government, my conversations with people in the intelligence system worldwide, have been really limited. And maybe that’s because there’s other researchers that are doing this stuff. But we have a ton of information to share.

There’s been some interest but, you know … It was concerning to us that what we were saying wasn’t being acted upon, or at least that we weren’t being consulted in any way, shape or form if it was being acted upon. Because here’s one of the issues: we produced that research, we did the analysis, we understood it back to front. And so even if they hadn’t contacted us and they were acting upon it, the worry that I would have is that they misread what we wrote, or they didn’t understand it very well, or that they were just responding to the problem by building bots that fought bots or something like that, which is something that’s come up a lot in conversations I’ve had with people.

There’s an important thing to understand here and that’s that the political communication business is part of Facebook’s business. So elections happen every year in multiple countries around the world. And Facebook has a captive audience for politicians who are hoping to advertise to their constituents. They also have a captive audience for anyone that’s hoping to advertise to citizens. And so Facebook got into the political advertising game but in a way that was completely unregulated.

Regulating Big Data

And internationally there isn’t much regulation of Facebook.

Internationally there’s very little regulation of Facebook. There have been conversations in countries like Germany, and there’s been some regulation there. There have been conversations in Brazil. But we’re almost in the same place we were in 2016 in terms of regulation of the companies. So I guess here’s the point I’m trying to make. In 2016 – take the U.S. election as an example – Facebook, Google and Twitter all had embedded employees with the Trump campaign. They had people working on behalf of the Trump campaign. There’s a researcher by the name of Daniel Kreiss [Associate Professor, University of North Carolina School of Media and Journalism], and Daniel did some research where he said the Trump campaign treated the social media companies’ representatives like consultants: the campaign took active advice on how to run its advertising from Facebook, Twitter and Google. Clinton and her campaign, on the other hand, treated them sort of like vendors and held them at arm’s length.

So the result was that Trump’s social media campaign and the ways that Facebook and Twitter and YouTube were leveraged in advertising to the populace were much more sophisticated than Hillary Clinton’s. In fact, social media and the internet were one of the only spaces where Trump outspent Clinton on advertising. The other thing here, though, is that it wasn’t just politicians or campaigns or governments that were using Facebook to advertise during political events. Nearly anyone worldwide could have done that. And it was happening in Ecuador under [Rafael] Correa, then-president of Ecuador – there was open use of social media to manipulate public opinion.

And the thing is, we’ve discussed computational propaganda – the ways that bots get used to manipulate people, or the ways that human trolls get used. But what we’re not discussing is the fact that you can use completely legitimate means on Facebook. And people around the world absolutely did use targeted advertising that spread misinformation to people through Facebook’s advertising structure.

So Facebook is actually facilitating that as part of their business?

Yeah. Facebook facilitated political communication and Facebook facilitated in some cases the spread of mis- and disinformation through its advertising structure. And the problem was that there was no real oversight. So I spoke to one researcher, for instance, who gathered, she said, over 70 million different impressions from advertisements on U.S. citizens during the election on Facebook. So she, this woman, gathered a ton of data on what U.S. citizens had seen advertising-wise during the election. And she said that of the ads she gathered, the majority of them had no information on who had bought them or who was doing the advertising.

There are dark ads.

So there are dark ads. So the point is that most of the ads she collected were dark ads. There was no information on who had bought them, who had sold them. … There was no information on who was behind the ads. And that’s super problematic. Think about that around the world.

Think about the ways that Facebook could be used in Turkey or Saudi Arabia or throughout the rest of the world to manipulate public opinion through the advertising structure. If anyone can buy an ad and share it with a particular group of people and if Facebook’s not keeping track of it – which they aren’t and weren’t – then that’s super problematic. There’s this sort of low-hanging fruit that is the moderation of advertisements during the next U.S. election.

So there’s been conversation between Facebook and Congress, and with Facebook and the news media, that there should potentially be some kind of tracking of political advertising on Facebook. But even that is stalled out. So the Honest Ads Act – that conversation has stalled out. Because what happens is that when you start to regulate political advertising on a site like Facebook, you start to potentially affect their bottom line very quickly because they’re making a ton of money off of this stuff. It’s a multimillion-dollar industry.

And so do you have any sense of … Facebook pushing into this realm of political advertising abroad with little oversight, I’d imagine, and how concerning that was?

You know, early on in our research we understood that Facebook had decided to monetize through advertising and that political communication was going to be a portion of that advertising. And that wasn’t just occurring in the U.S. and the United Kingdom. It was happening around the world – in South America, in Ecuador, Brazil, Argentina, Colombia, Venezuela. It was also happening in Asia. It was happening in Africa. Facebook was selling advertisements to political campaigns in an effort to allow them to reach the populace. The thing was, and the point of concern that we had was, that anyone could have bought the ads. Anyone could buy ads, and they didn’t even need to say who they were to the people they were advertising to, and they had a captive group.

There’s a lot of conversation about Cambridge Analytica and how they can do psychographic marketing of individuals. But Facebook has the infrastructure to allow for massive targeting of social groups that is completely legitimate, that it sells to people. And so in countries like Ecuador or in countries like Argentina or Brazil – the South American cases – there was absolutely the spread of political communication and political advertising through Facebook. And so the concern was this was happening well before the U.S. election. We knew that people were buying and selling political advertisements around the world on Facebook and that they were dark ads, meaning that they didn’t have any kind of notation of who had bought them.

Or who had seen them.

Or who had seen them. Exactly. And then the crazy thing was that when we actually started talking to Facebook, or when I had conversations with people at Facebook, they said: The idea that we would collect and store all the advertisements that occurred during an election is insane; that’s way too many advertisements. And so one of the lines of defense Facebook has taken is that in order to control this problem [they’d] have to collect immense amounts of data, and that’s a huge task. And my response to that is: You’re the company, you’re in charge of it. You scaled so fast. You’re a multibillion-dollar company. So yeah, you need to track this stuff. You’re going to be held culpable. There will be regulation across the world to make sure that political advertising that occurs on social media is regulated. It’s just a matter of time.

But in the meantime, I think Facebook needs to show a willingness to protect democracy and that it shouldn’t be selling ads to certain groups. It shouldn’t be selling ads to hate groups. It shouldn’t be selling ads that are dark – that we don’t know who’s seeing them and we don’t know who’s behind them.

And this has been happening globally for a number of years now. It wasn’t just happening during the U.S. election. It wasn’t just Donald Trump and Hillary Clinton that made use of advertising on Facebook. It was the “alt-right.” It was Russia. And this was happening in other countries too.

I mean, this is the thing. You had written: Facebook has embedded itself in some of the globe’s most controversial political movements while resisting transparency. Right? So help elucidate that point for me.

Right. So the point that we’re making is that Facebook has acted as a consultant, has sold their space, has sold their users as a product to politicians and political campaigns and others around the world. And I think that there’s something that we should really be clear about and that’s that advertising is only the tip of the iceberg. Advertising on Facebook accounts for a small portion of what we see, and in fact, a lot of people probably ignore ads on Facebook. They don’t even pay attention to them. The fact is, though, that Facebook was consulting on other ways of reaching people too.

So how do you use this stuff more effectively? What’s the most optimized route for getting to the people that you want to talk to? You know, how can you make use of group pages? How can you bolster your presence on our site? This is a holistic approach to using the platform. It’s not just that Facebook was saying you could buy an ad and advertise to people. They were saying, “Let us help you use Facebook in a more effective way to spread your politics.” And they were offering it to – this was an equal-opportunity offer. But the thing was, the people that decided to buy in varied.

Facebook In The 2016 Election

Meaning what?

Meaning that, for instance, during the U.S. election the Trump campaign massively bought in whereas the Clinton campaign didn’t. And so the Trump campaign had a huge advantage online over the Clinton campaign. And the other thing is that, worldwide, what this meant was that any political party within reason, or any group, could have bought advertisements and targeted particular social groups with those ads. But also any group with enough power, with enough money, with enough capital, hypothetically speaking, could have hired Facebook to consult [with] them on how to use the platform, or could have hired Google, or could have hired Twitter – and in fact did. The U.S. election [is] a really good example because it’s the most poignant, and it’s the time when people really began to pay attention. But this was happening before the U.S. election.

But you could say in the U.S. election that the Trump campaign was smart in using the most powerful, potent tool to reach voters and target voters and that the Clinton campaign was not smart. They didn’t use the most potent tool in our democracy these days to reach and target voters.

Look, I mean, that’s true. That is true. The Trump campaign was smart. Whether or not they were … Whether or not this was calculated and they were like, “We’re only going to invest in social media” remains to be seen. It might have just been that their digital director was more savvy than their other media or advertisers or people running their communications campaigns. But I tend to think that the reality of this situation was that Facebook and Twitter and Google reached out actively to the campaign and the campaign said, “Sure, like, why wouldn’t we take your help?”

What we saw was a savvy use of social media in the political realm. But what we saw was also manipulation. Let’s call it what it is. Right?

… You could say that that’s just democracy at work in the 21st century, I mean, that there isn’t necessarily anything wrong with that; that Facebook is helping candidates who seek elected office reach [voters] all over the world.

Sure. But you could also say that if you’re not tracking what the advertisements say, how the advertisements are reaching people, who they are reaching, that that’s a huge problem. And what’s the content there? Like, we need to know. And a lot of people, Jonathan Albright, researchers at NYU from the Social Media and Political Participation Lab, our own project, were tracking a lot of this stuff and saying [that] a huge amount of this product, of this content is problematic. It’s not just politics as usual.

Problematic, just explain problematic. Problematic in what way?

Disinformative, misinformative, laden with untruths or targeting particular social groups in a way that was wrong, I think, you know.

So basically, what you’re saying then is that these companies like Facebook are helping to facilitate actively and consult on the spreading of misinformation and disinformation?

What I’m saying is that the companies were allowing their clients to use their platform to reach constituents or to reach people and use disinformation as a means of, as a viable means of manipulating them. And so that’s worrisome. Right? Like it wasn’t just that memes were circulating that made Hillary Clinton look like the devil or things like that. It was that Facebook was selling a product. And the saying goes among social media companies, “If you don’t know what the product is, you’re the product.” And so on Facebook your data is what is sold, what is bought and sold. The advertisements are just a means to getting to you. But there’s lots of other venues of getting to the users. And so users put all sorts of information on Facebook. Back in the day, you used to put whether or not you were in a relationship.

Facebook And Political Campaigns

So yes, you’re saying it’s not just ads, there is more to it.

So the product that Facebook is offering goes beyond just advertising. It goes towards offering up users’ content. So what you and I post on Facebook is readily available to those groups. So back in the day, you used to be able to put that you were in a relationship or not in a relationship. But now it’s become what political party you’re in, who you like, what you follow. It’s become, actually, way more granular, way more focused. So when you look at someone’s group pages or the things that they join and follow or what they even are talking about, you have a really good picture for who that person is. And so there’s just sort of a means for Facebook to create a bespoke audience for anyone that comes knocking and saying: I need to advertise my product and if that product is politics, then you can absolutely do that.

And OK, so here’s the point. Right? The point is that you can advertise about politics on any other medium. You can do that on TV, you can do that on radio. You can call people on the phone, you can do this stuff. The difference with social media, and how advertising and political communication happen there, is that it’s completely opaque. We don’t have any information on how it occurs. There’s no regulation. The FEC [Federal Election Commission] doesn’t touch communication that goes on during elections on social media. So we need to be concerned about this stuff.

Yes, political advertising is a part of elections, it’s a part of modern democracy, and political advertising that occurs on social media is a part of modern democracy. But we should not accept it as normal that we have no information on who they’re targeting and how. We should not accept it as normal that they’re able to access all of our data and to granularly target different social groups in a way that’s problematic, in a way that spreads misinformation, in a way that spreads disinformation, or in a way that foments offline protest or violence. Think of Pizzagate. There was a lot of different stuff being spread around, different instances where there were actual offline consequences to this stuff.

Right. You know, I’m just kind of curious because your research sort of coincides with Facebook’s push where they’re into the news business more, right, the news distribution business. And I’m just kind of curious if you can kind of place – if you think it’s significant – if you could place in context that while you’re doing this research they’re becoming much more of a news source.

Yeah, that’s fine. When we started in 2013, there was less clarity about the ways that Facebook would be used to circulate news or to help people to digest news. But as time went by, Facebook pushed more and more into this area. I don’t think it was until 2014 that Facebook decided that they were going to have trends even. It was a decision that they made that suddenly appeared overnight. And so what people were seeing was actually like an indication from Facebook that, hey, look at our trends, this thing is newsworthy. This thing is at the top of the public zeitgeist. People are paying attention to this stuff. And so Facebook made conscious decisions that it was going to become at least a purveyor of where the public was focusing its attention on the news. While Facebook never has produced written content and things like that, they have facilitated the spread of the news and they’ve also curated the news.

Here’s the really important point. Facebook curates the news that you see. When people use Facebook to digest news and that news is curated by a company, that means what you read is being controlled by a corporate entity. You might say to me, “Well, The New York Times does that and so does The Washington Post and so does PBS.” But the point that I’m trying to make here is The New York Times, The Washington Post and PBS are heavily regulated by the Federal Communications Commission. They’re also regulated by the Federal Election Commission during elections. And so the problem here is that Facebook gets to do that. They get to curate the news, they get to decide which pieces appear in your News Feed without any kind of regulation. And the fact that there’s no oversight is hugely problematic.

Fake News

And so in terms of Facebook increasingly becoming a vector for misinformation or fake news, how does that coincide with what you were doing at the time?

So at the time, because people started to read news on Facebook, to get news on Facebook, to share it there, we became increasingly concerned with the fact that disinformation was going to be spread via Facebook and it actually played out very quickly. Not only were memes being spread that alleged crazy political scandals in the United States and abroad, but also there were lots of fake articles and fake videos and disinformation, broadly speaking, being prioritized by the Facebook algorithm as popular. So I guess what I’m saying in really simple terms is: The Facebook algorithm facilitated widespread viewing of disinformation around the world. It allowed people to access this information. And the direct response, the response I always get from people when I say this is: Well, people just consume what they want to consume. People would have been getting that stuff anyway. And my point here is that Facebook made it available. It was something that Facebook prioritized, that it allowed to come through. I’m not saying that they should have censored the content, but there needs to be some system for indexing what is potentially harmful to society.

And if that stuff is hate speech, if it’s harassment of journalists, if it’s literally untruths, then we need to figure out how to moderate that content because we moderate content on every other media. We make sure that lies don’t circulate on reputable news. Why don’t we make sure that lies don’t circulate on Facebook?

And not just that. I mean your research suggests that, OK, Facebook is an ecosystem in which lies can be spread. But you’re also saying, I mean, what I glean from your research is that you can supercharge that – that someone with a political agenda or someone with political power, right …

Yeah.

So help me get that.

Yeah. So what social media allow for, what Facebook allows for, is the ability to amplify your presence during political events. So it’s not just that you can advertise or that the ecosystem prioritizes disinformation. It’s that if you want to spread disinformation you have a very easy means of doing it. You can use automation, you can use the group page function to spread that propaganda. You can use advertising. There’s a number of different ways that you can come at an audience using a site like Facebook in an attempt to manipulate them.

And this is what you were researching at the time?

Yeah. At the time, this is exactly what we were studying. We were trying to figure out exactly how people were being manipulated and it was really unclear in a lot of ways because the data wasn’t being shared. The people that we were speaking to were saying, “Yeah, we’re absolutely able to do this,” but at the same time, the companies weren’t attempting to have conversations with us. There was no intent to collaborate. And to this day, I have still not collaborated with Facebook or Twitter on any research and I’m 110 percent open to that. But it hasn’t happened.

[There were] specific incidents in Mexico and in Turkey. Specific issues that the noise was around.

So in Mexico, there was a movement hashtag, “ya me cansé” [“Enough, I’m tired.”], that was spreading. It had to do with fatigue around politics as usual in Mexico. One of the things that we saw happen was that young people were using this hashtag to spread their dissatisfaction with the politics in the country. But the other thing that we saw quickly was that that hashtag, that meme, that phrase, was quickly co-opted by powerful political actors and used against the young people, used against activists. So what actually started happening was that activists would use “ya me cansé” as a rallying cry to organize themselves on- and offline. They would use it as an indicator that there was a safe space to discuss Mexican politics. But the government and powerful political actors in Mexico, including cartels and others, realized that if they homed in on “ya me cansé,” which was trending on social media, they could both monitor what activists and young people were doing and inject their own views and their own ideas into that conversation.

And sometimes what they did was inject spam. So they just built tons of bots that spread junk so that people just couldn’t have a viable conversation. There was just a lot of noise. And at other times they were using that hashtag to attack activists, to create what we call a chilling effect. What that means, basically, is scaring someone into not talking about something. And I spoke to journalists who had been attacked using these tactics and who said that not only had they gotten offline, they had left Mexico because they were so fearful. And it wasn’t just that the “ya me cansé” hashtag was being attacked and that specific individuals were being attacked online. They started being followed offline. They started getting people knocking on their door offline. There were real repercussions.

That’s all right. We’ve got repercussions in. But who’s the Colombian guy?

Oh, you want me to talk about him?

Is that cool?

Yes. Totally, totally. So it came to pass, actually, that after some of the events in Mexico in 2012, someone that had helped manipulate public opinion came forward. His name was Andrés Sepúlveda, and Bloomberg did a big piece on him called, I think, “How to Hack an Election.” And Sepúlveda was really open about the fact that he had been hired by the Mexican government to manipulate the populace using social media. The quote from that article that really struck me from Andrés was: This wasn’t the hacking of technology or voting machines. This was the hacking of public opinion.

… And as far as we know, Facebook and other social media companies were uninterested in what he was doing?

As far as we know, the companies weren’t really aware of what he was doing, which is even more concerning. It’s not that they weren’t interested in the fact that their platforms were being leveraged for manipulation in Latin America; it’s almost as if, because they were English-speaking platforms, they weren’t even paying attention to the fact that manipulation was going on in Spanish on their platforms. So remember what I said, which is that Facebook had very few people working in user experience research, studying people online and how they use the product, and actually talking to users.

And so what that means is that they probably had very few people working in Spanish or other languages. They’ve scaled that up since this happened, because Facebook has become multinational and it’s used by people in multiple languages. But how could Facebook possibly, with the employee numbers that it had, manage political manipulation throughout Latin America? They couldn’t. And the thing was, they kind of ignored it. They scaled so quickly; the imperative of scale was what damned them in a lot of ways. The question I often asked people, and it’s kind of a rhetorical question, is: Should Facebook have scaled as quickly as it did? We were wildly excited about it, but we allowed it to grow out of proportion with our understanding.

And maybe even their own understanding.

And no, not maybe, absolutely beyond their own understanding. Facebook did not understand some of the implications of what happened with its platform. There’s a lot of bad criticism about how propaganda occurred via these platforms from people that are quasi-researchers or from people who are pundits.

But the bottom line a lot of the time is that Facebook itself didn’t know how their platform would be misused. They couldn’t have foreseen how it’d be misused. But what I think they also didn’t know was: What will this technology do? How will it be used in the future? They weren’t thinking forward. There was such an imperative to think about right now, about getting to this number of users, that they weren’t taking the time to think about how Facebook could potentially affect democracy, how it could affect control in authoritarian countries, and how it could be used [or] misused for the manipulation of public opinion. And that, honestly, comes from my conversations with people at Facebook.

Facebook’s Reaction To Warnings

In that they were aware of all those concerns but they just had another imperative, which was growth.

No, in that they were open about the fact that they didn’t know how it would be misused, that they were building this fantastic product. And it’s not that there was …

I know, but this is what I don’t understand. There were people like you who were saying, “Your products are being misused.” So you can’t say that they didn’t know.

No, that’s exactly right. So the point that I was going to get to is that there was this play by the social media companies of wide-eyed ignorance about how their platforms were being misused. And that’s absolutely off-base. There [were] people, many more people than just me, out there saying your stuff is being used for political manipulation, not just in the United States, not just in democracies, but throughout authoritarian countries. And so Facebook had no grounds to claim they didn’t know that there was manipulation going on. What I’m saying, though, is that the tool, the tool that is the social media platform, scaled so quickly that they couldn’t keep track of what was happening. I’m not saying they didn’t know that manipulation was happening. But what I am saying is that they grew so fast, they let themselves grow so fast, the U.S. government let them grow so fast, that they suddenly became an entity that no one really understood. Their algorithm changed, and changes, so much, and the features on Facebook changed so much, that it was, and is, impossible to keep track of all the different ways it could be misused.

And again, the way of doing work at the social media company was: Build and launch and research later.

Facebook And Political Campaigns

“Move Fast and Break Things.”

“Move Fast and Break Things.” And you know, this is a side note, but the companies presented themselves and continue to present themselves as benevolent social actors. Google had the tagline “[Don’t be] evil” for a long time. Facebook had a similar ethos of trying to protect its users, and of trusting that users would let them know if something was going wrong. The problem was that Facebook knew that something was going wrong. They knew that their platform was being used for manipulation well prior to the U.S. election and they didn’t take time to do anything about it. And the blame is not just on them. The blame is also on governments around the world for allowing the social media companies to continue to grow.

So the U.S. election was an insane time. I had made a decision – this was about three or four years into my Ph.D. – that I was going to do my dissertation on the U.S. elections. So I decided I was going to study the way computational propaganda played out during the U.S. election. This was in January of 2016. I had actually made the decision in the fall of 2015 that the U.S. election was going to be my specific focus, and I had no idea what was going to happen. I had a hypothesis that there was going to be manipulation during the election. I had even written articles saying we need to pay attention to this stuff, this is a problem.

… We went into the U.S. election and what I presented to my Ph.D. committee was a systematic study of how bots and computational propaganda were being used throughout the election by a variety of different actor groups. I homed in on three different actor groups. So I looked at politicians and political campaigns and how they were using bots and computational propaganda. I looked at journalists and how they were being targeted by this stuff, but also how they were making use of automation to facilitate their news spreading. And then I also looked at what I call digital constituents. So digital constituents were basically anyone who was building and using bots for their own means. And that latter category is really interesting, because at first I had assumed, and I think that we had wrongly assumed in the Computational Propaganda Project, that only powerful political actors were going to make use of these tools. But what became really apparent during the U.S. election was that a variety of different groups and a variety of different individuals were making use of computational propaganda. It was not just the Russians. It was the “alt-right.” It was the far left. It was everyone in between who had figured out by this time in 2016 that they could use social media as a means to game the politics of the system; that they could use social media to trick people; and that they could use social media to amplify their perspective over others’.

And so what I did in the U.S. election was, for nine months off and on, I traveled around the United States. So I went to Detroit during the primary there and saw Clinton speak, and I saw the Republicans, and I talked to a variety of different people there. But I guess the point here is that I traveled to various cities in the United States, including New York multiple times, throughout that period to discuss how these tools were being used and misused during the election. And so that meant that this was fundamentally an ethnographic project, which means that I was doing field work. I was talking to people, hanging out with people, and getting a networked perspective on the event – how was propaganda playing out during the U.S. election. And it quickly gained legs of its own. In May of 2016, as I mentioned, we wrote this Wired article saying bots could ruin the election. And then it just cascaded. Things just got crazy because the amount of disinformation became so much that I couldn’t even keep track of it anymore.

Our project at the time at Oxford was collecting a lot of the data from social media in an attempt to keep track of things. Simultaneously, I was interviewing people, and it was almost as if the situation just blew up. It was like, wow, we knew that there was going to be some manipulation on social media during the U.S. election. We didn’t know to what extent it was going to play a huge role, not just in driving conversation but also in prioritizing particular candidates’ communication over others’. And you know, there were a number of different moments throughout the U.S. election. Some of the people that I spoke to were digital political consultants. So I actually spoke to consultants that worked for the Trump campaign. I could never quite get anyone at the Clinton campaign who worked on the digital team to talk to me, not for lack of trying. But I talked to some people who had worked for Trump, and they had said things to me about how social media was their primary focus and how they were going to use it to control conversation. And they suggested to me, both during the election but also after the fact, that the tactics that were used by the Trump campaign on social media – to speak really brashly, to spread untruths, to push people to attack specific individuals – were all actually part of a larger plan.

I kind of don’t know if that’s actually true or not, but I think that what they realized was that that was very effective; that they realized that one day you could be talking about Trump slandering John McCain and his history as a war hero and the next minute, you know, you’d be talking about something completely different because social media had pushed the conversation.

So I mean, but these were people that had worked or consulted for the Trump campaign that said to you that this was a part of the strategy, was to basically create incendiary content that would go viral on social media platforms?

Yeah. The crazy thing was that at the time I was really focused on bots. And so I wanted to know whether or not the Trump campaign was making use of political bots to drive up or to amplify its speech or whether the Clinton campaign was using these, and so I asked them lots of questions about that. But what I realized kind of as a byproduct of this research was actually something more interesting, which was that it was a more holistic approach to treating the internet like a petri dish – so actually running the experiment and basically pulling no punches. In the words of one person that worked for some prominent Republican campaigns in 2016: On social media we throw everything against the wall and see what sticks. And so the idea here was like, how do we get content to go viral? How do we get people to pay attention to us and how can we control the dialogue?

And this went well beyond Obama’s use of data to contact undecided voters in 2008 and 2012. This was like an all-out campaign on social media. This was something completely new. This wasn’t something we’d seen.

Wow. I mean, are there … If you could point out, in our last 30 seconds, a specific moment that you think completely typifies this that we might be able to use.

Yeah. So midway through the primaries, or actually towards the end of the primaries, I was in New York City and the Republicans and Democrats held a simultaneous primary there. And so my goal was to hang out with the Trump campaign and hang out with the Bernie [Sanders] campaign and hang out with Clinton and [Ted] Cruz, who was still a part of it. One evening I didn’t have a lot on, and I was scrolling through meet-ups about the digital communications of the campaigns, and I landed upon this group that was working for Ted Cruz and was having a meet-up in a tech store in Chelsea. And I thought to myself, I’m going to go to this meet-up, I’m going to check this out, because I have nothing better to do; there’s a Clinton event later.

So I show up to this tech, this weird tech store. And they’re like, “Oh, yeah. The event’s in the basement. Ted Cruz’s digital team is going to be downstairs talking about how they made use of social media.” I went downstairs and it transpired that it actually wasn’t Ted Cruz’s digital team. It was a consultant that was working for Ted Cruz that was there, and there were maybe 10 people in the audience. And the consultant was Cambridge Analytica. They were a firm that, at the time, no one really knew about, but that later went on to work for Trump and claimed that they could use psychographic marketing – manipulation of people’s psyches through individual targeting – to massively affect how people thought, felt and reacted to the election.

And basically, what happened at this meeting was Cambridge Analytica opened up their playbook. They said, “We use this particular model called the OCEAN [openness, conscientiousness, extroversion, agreeableness, neuroticism] model to manipulate people on social media. We have a long background. Our parent company has worked for politicians and regimes around the world, including multiple authoritarian regimes, in an attempt to spread information or a lack thereof around these pivotal events.” And I was just flabbergasted. I was like, wow, these people are really opening up the playbook. At that time it was a foregone conclusion that Ted Cruz was pretty much out. He was going to be gone. This was Donald Trump’s home turf. And he looked likely to win the Republican primary in New York state. And so I feel like at the time Cambridge Analytica and their parent company had decided that they could be more open.

Their chief data scientist was there, and their head of marketing, and they just let us ask questions. And so I asked lots of questions like, “So, wait, you’re telling me that you’re doing wholesale manipulation of [the] public online?” And they’re like, “Well, this is totally above board. You know, people might not quite know that we’re the ones buying the advertisements or this and that, or that we’re using automation to make these connections, or that we have data on over 230 million Americans.” That’s what they claimed at that meeting: “We have data sets [on] over 230 million Americans that will allow us to manipulate them during this election. But this is all legit.” It wasn’t until the summer, when Cambridge Analytica went to work for Donald Trump, that I realized, oh, my gosh, it’s not necessarily that Cambridge Analytica is able to do what they said they could do, because I think a lot of what they said was marketing. It was that the culture had gone to such a place that groups like Cambridge Analytica understood that they could make a lot of money by doing really underhanded things with political communication on social media – and that might not have been fully realized as psychographic marketing and individual targeting at that time. But you can bet that in 2018 and 2020 and around the world, individual targeting is absolutely going to become a possibility.

And also that in a world in which we’ve given up all of our data and our privacy, we’ve made ourselves completely vulnerable to this type of manipulation.

Right. I think that there was sort of an un-thinking-ness to how we used the internet early on, because we all thought of it as being a democratic technology, and so we put a lot of information online. There were very few people – and those that did know can pat themselves on the back now – who knew that what they posted on social media was going to come back to haunt them later. I remember back in 2005, as a freshman in college, thinking I can post whatever photos I want to share with my friends, this is hilarious. And there was stuff online that I hope people never saw, you know. And now the thing is that all that information has been gathered by companies like Cambridge Analytica through underhanded means, but also through buying it from credit companies. And suddenly we’re in a place where they have really good knowledge of how we think, feel or act.

originally posted on pbs.org