Zeynep Tufekci is an associate professor at the UNC School of Information and Library Science, a New York Times contributing opinion writer, and the author of Twitter and Tear Gas. This is the transcript of an interview with Frontline’s James Jacoby conducted on May 22, 2018. It has been edited in parts for clarity and length.
So let’s start out in the Arab Spring. Bring me into what work you were doing there. You were studying the uprising, correct?
So tell me the story of how it was that you ended up there.
Right. As someone who had been studying social movements and censorship and all those things, I was actually quite familiar with a lot of people who ended up playing prominent roles in the Arab Spring, because the people who ended up being very visible on social media and doing a lot of the online speaking and organizing, they didn’t just pop out of nowhere. There had been years of conferences, blogger meetings and all those things. It was kind of these ragtaggy small groups of people who came together. For them, of course, it was quite significant, because until quite recently, they had no means of finding one another. And these are people across North Africa, in Lebanon, in Syria. There were a lot of people who had started connecting to one another.
And because the governments thought, “Oh, it’s the internet; it’s just a tool; what could happen? They’re just playing games, cat pictures,” they kind of let them be. Also, a lot of governments saw it as economic development, so a lot of these political bloggers were also slightly technical people, because the internet wasn’t as easy to use then. There was this natural overlap: they worked for start-ups, or in IT support, or whatever they did. So a lot of governments left them alone, and they used that opening to get in touch with one another and to organize across the country.
… In late 2010, when the first Tunisian incident happened – a fruit seller set himself on fire, a horrible incident – there were these protests. We were kind of watching as the government tried the same kind of techniques, the tried-and-true techniques of censorship: There is an incident in a town, people are unhappy about something, and what you do is you go and you block the town; you put in lots of police; you either fire at the protesters or fire teargas or water cannons, whatever level of repression you are willing to use; and you isolate them and wait for it to die out, because in isolation a social movement can’t really survive.
They’d done this many times before. In Tunisia just a couple of years earlier, there had been a similar incident in a mining town, and the government had used the same playbook: isolate, censor, crush. But what happened this time was that the internet and the networked public sphere, where people had already been connecting, were able to constantly get news out and constantly talk to one another. It was this combination of expat Tunisians, people inside the country and all these journalists who were starting to look at it and say, “Wow, what’s going on there?” … When the Tunisian uprising happened and [then-President Zine El Abidine] Ben Ali fled, it was a big jolt. It was a big jolt for everyone, because it was really the first time where we saw digital technology aid a social movement, break censorship in a very successful and highly public way, and get what they wanted, which was for their president, who was this multi-decade autocrat, to flee. And it had happened mostly through Twitter and Facebook, because those were the platforms everybody was on.
Of course there were other things, too. It had happened through these larger platforms, and partly bloggers, too. And everybody was like, “Whoa, wow.” In the West, policymakers, newsmakers, social movement people like me were going, “Whoa, look at that.”
But more importantly, people in other Middle Eastern countries, like Egypt, were like, “Look what they did.” And it wasn’t just “Look what they did” as in looking through CNN, because in the past, if you were watching some sort of major uprising or revolution, you would be watching it on CNN, or Al Jazeera if you had the chance. You’d be watching it from afar, this sort of sky-high view coming down. Instead, they were talking to them. They were like: “How did you do this? This is amazing.”
Then there was a Facebook page that was created by Wael Ghonim, who was not even in Egypt at the time. He worked for Google as a marketer, and he was in the Gulf region. The Facebook page was called We Are [All] Khaled Said, named after a young man who’d been brutally murdered by the Egyptian police. His heartbroken family had put up before-and-after pictures – the smiling young man, and his body after he had been tortured – which caused this huge uproar online, because people could see it and spread it.
So he created this page, We Are All Khaled Said, meaning Egyptian youth felt like this; they could be next. In fact, a lot of people I interviewed said this was a great motivation for them, because they could be next – this random person, who wasn’t even very political, killed by the police so capriciously. That page started growing so fast. Just everybody was joining, and people were kind of communicating.
A lot of times people point to the fact that at the time, there wasn’t that much internet in Egypt. That’s true, but there was enough, right? You don’t need everybody to be online; you just need enough people to be online for that kind of communication to spread. And young people were, of course – as they usually are – early adopters, and they would see this, and they would show their parents, and they would show their cousins, and they would start showing all of this.
There’s a spirit of techno-optimism in what you’re saying about the time. Was that the prevailing ethos, do you think, in terms of the movement in North Africa and others at the time?
There was definitely a spirit of techno-optimism, in the sense that these tools seemed amazing, and governments were so clumsy about them that a lot of activists I talked with believed that this tool would always be on their side – that governments composed of older people and bureaucrats would never be able to grasp this power.
In fact, you see this later in the story. A lot of people start out thinking the tool will always be our tool, like the powerful cannot come and invade our space and take this tool and turn it against us. But if you look at the history of technology, the powerful always invade that space and then take the tool and then adapt it to their own needs. But at the time, it seemed like they would never get it. …
It’s all very optimistic, but at the same time – and you’re alluding to it – was there a recognition among people in your sort of circle of thinkers, people who studied this stuff, that the internet was all of a sudden becoming centralized at that point in time, and that there was a potential concentration of power there? Tell me about that in the context of that moment, if you could.
… You see this major centralization that’s driven by a bunch of things, probably the most important of which is network effects. Now, that’s the term. “Network effects” means that something is more useful to you if there are other people on it – like a fax machine, which younger people will be like, “What is that?” Or let’s say you had a phone that could only call people from the same phone company. If you have Verizon, you can only call Verizon. If you have T-Mobile, you can only call T-Mobile.
If there were more people on Verizon, you’d be like, “I’m going to go there,” right? If more of your friends were there, you’d just go wherever there were more people, because a phone that gets great reception but cannot call anyone, that’s not useful, right? Internet platforms are like that. The more people there are, the more useful it is. So Facebook, once it kind of got beyond the threshold that it got lots of people on it, all of a sudden you had to be on it, too.
You might not like the company; you might not like its privacy policies; you might not like the way its algorithm works; you might not like its business model. But what are you going to do? You can’t just sit it out. So between 2004 and maybe say 2014, in those 10 years, you see this runaway effect in which Facebook manages to grab a significant number of people on the platform.
Once you do that, other people who want to talk to those people start getting on the platform, too, and it starts spreading around the world, which really intensifies the effect. For example, in Brazil, there was an alternative social media platform called Orkut, which was quite popular. But Facebook had taken hold in the U.S. and Europe, so if you were a Brazilian and you wanted to talk to your expat community, you had to get on Facebook, too. It just didn’t make sense to maintain two social media accounts.
Facebook As A Privatized Public Space
… But there’s something that’s inherently a little bit different about a company like Facebook, and I’m curious about your thinking going back – you know, this is a public space to some degree, right? It’s a private corporation in charge of a public space. Was that something that was on your radar at the time? Can you talk to me about that a little bit, putting yourself back around 2011?
It was before the Arab Spring when I started calling this a privatized public space, because it was a public space, right? You went there; you talked to friends. There was a lot of politics, a lot of civic things, but it was also like being in a shopping mall. Shopping malls are kind of like that. They look like a public space, but if you go try to distribute a leaflet, they’re like, “Oh, private property, go away.” If you come in as a shopper to spend money, they’re like, “Hi, welcome, public space, walk.”
You get this feeling as if it’s a street, but if you come with anything they don’t like, all of a sudden they bring out the fact that it’s a private space. I started calling Facebook a quasi-public sphere – not really public, not really private – and I started comparing both Facebook and Google to shopping malls before the Arab Spring, in 2009, 2010, because it was pretty clearly apparent to me that that’s what they were.
I’m not sure it was as apparent to the companies, because I think they saw their public space role, and they were proud of it, and I think they didn’t think it was going to cause them a headache, right? They’re like: “Oh, this is great. We’re like the super public space.” Then when the Arab Spring happened, I know that a lot of people in Silicon Valley thought our technologies helped bring freedom to people, which was true… A lot of people were very grateful to Facebook and Twitter, and people at Facebook and Twitter were thrilled that this thing they created had been used to bring people freedom. What’s important to remember here is that the optimism was not unwarranted. Neither is the realism. That, I think, is important, because these platforms did bring new freedoms that didn’t exist. Looking back so many years later, in 2018, you might think, oh, the optimism was completely empty and hollow.
That’s not true. Both are true. There was a great deal of connection and freedom that has come from it. It’s just that it’s not so simple, right? There are all these other complications. … In fact, this is the post-Arab Spring process, where it’s not just governments. Other dissidents, other people who wanted to be a political opposition and found themselves otherwise censored in mass media – they all go on Facebook, and they hit Facebook’s rules. Facebook’s Terms of Service, right, requires that you use your legal name, and in some countries that’s not really safe.
There are things that you post, and then Facebook moderators remove them, and all of a sudden you have no recourse. You write an email into a black hole. You know, good luck. Facebook doesn’t respond. I heard of cases – and this is true for Google and YouTube, too – like on Blogger, which is owned by Google, where an Egyptian dissident had a page documenting a really important case of police brutality, and all of a sudden his blog post would disappear, and it would say “Copyright violation.” He’d have no clue why.
Because I was in the U.S. and because I had been writing about this, I started becoming a little more noticed in traditional media, but I still had this interface with all the activists, right? They started coming to me because they could get no traction with these giant companies operating at the scale of billions in Menlo Park or in Silicon Valley.
… So I found myself spending an enormous amount of time going to these companies, to people I know, and saying: “Can you take a look at this? Can you take a look at this? Can you take a look at this?” (Laughs.) There were days I would be spending hours, because there were so many cases like this, where people would just be coming to me and saying, “My page that is followed by hundreds of thousands of people is gone without an explanation, and Google won’t respond”; “My YouTube video, gone without explanation”; “My Facebook page, gone without an explanation.” I appealed to the degree I could – sometimes there’s no way to appeal.
I’d just be calling friends in the companies, and somebody would fix it. That made a couple of things clear. One, these companies were terribly understaffed, in over their heads in terms of the important role they were playing.
All of a sudden you’re the public sphere in Egypt, right? You’re the public sphere in Burma/Myanmar. You’re the public sphere in Indonesia, in Sri Lanka, in Turkey, in all these places. But nobody’s looking over this, right? This giant page with hundreds of thousands of followers just disappears, and it’s this black hole in Menlo Park that you can’t get a response to?
So I kept talking to my friends at these companies and saying: “You have to staff up. You’re making these giant amounts of money. You’re very profitable. You’re growing really fast.” And they were operating with tiny staffs – Facebook at the time, I think, was something like 10,000 people, and that includes a lot of engineers. That’s ridiculous. I was like, “You have to put in large amounts of people, and you have to put in large amounts of people who speak the language, who understand the culture, who understand the complexities of wherever you happen to operate.” I think it’s only in 2017, 2018, after the U.S. election crisis, that we started hearing publicly from the companies that yes, they were going to do that.
But what was the response you’d get at the time from your friends at Facebook, when you’d say you need more people all over the world to deal with a whole host of problems that develop on the platform?
A couple of things. A lot of the people I talked to would be on policy teams, and they would be like, “We agree, but we don’t have that much power” – they didn’t run the place. So even the people who agreed with me didn’t run the place. A lot of times there would be little response. They’d be like, “Yo, we’re just trying to keep up, right?” They’re kind of in over their heads. And sometimes it would just turn into a discussion, because they would be like: “Do you really want us to moderate? Do you want us to be the minister of truth?”
Some of my friends at these companies would also raise philosophical questions. They would say, “Do you want us to become the minister of truth and have tens of thousands of people deciding these things, because the situations are complex?,” and, “Should Facebook or Twitter make a decision to take anything out or let it stay?,” and, “What if it’s full of hate speech in a country with ethnic violence?” Say in Thailand you’re not allowed to insult the king, but in the U.S., that rule violates the American understanding of free speech. What should Facebook do? So there were all these complicated questions, philosophically, too. But in the end, the reality was that these companies just did not staff up enough on the people side of their business, because they were under the illusion that they were in the technology business.

For years, when we had these conversations, I was like: “You guys aren’t in the technology business. You guys are in the people business, so you need to staff up on the people side of your business.” But the people who run these companies – the founders, the early executives – come from the technology world. Now, I come from that world, too, right? I’m a former programmer, so I understand the mindset there.
The mindset there is: “How can I write a program? How can I write a script that’s going to do something at large scale without lots of people?” You want to have a calculator; you don’t want to hand-calculate a million things. You’re like, “Can I write a little Excel formula and just do it?” That’s the mindset of a programmer. You want to write one thing and have it work at a scale of billions.

Now, my technology side really understands that, but I’m also a social scientist, and I would tell them you cannot do that with people, because you might have a billion people, and there is not going to be one formula that works in Burma/Myanmar that also works in Germany, that also works in Canada. That’s just not possible. It’s too messy. The human diversity and the social and political reality is just too messy for you guys to do this from this sort of mile-high view of “I’ll have one set of rules for the whole world.” I think they kept trying to do that. It was partly philosophy, the geek mindset, and it was also cheap. … Of course, not having enough staff is good for your profitability, because people are expensive, and writing a computer program that’s going to do one thing for a billion people is much cheaper, right? So these companies ran on very small staffs. You know, it’s only recently that Facebook broke 20,000, and this is a company that serves 2 billion people – and that 20,000 includes lots of engineers and ad people, not enough on the people side of the business. Similar thing with Google.
Philosophically, convenience-wise, profit-wise, technology-wise, it just made sense, as far as I can tell, to these companies’ leadership to keep thinking they were in the technology business and denying that they were in the people business.
… And I think this is the problem; this is the mile-high view. They’re in Menlo Park, and they’re like, “Let’s experiment.” I think someone just looks at a global map and says, “Let’s pick countries around the world with similar sizes.” It makes sense as an experiment, but nobody in the room has an understanding of the global political context. And I don’t think you need an advanced degree in South Asian or Central American studies or any of that.
You know, if you know anything about these places – if you’re just slightly informed – you would know these are not countries you pick. In fact, after this happened, I heard from a lot of people in these countries, people from news organizations, who said, “Fake news and misinformation is completely taking over, because we can’t even be there to try to compete with it.” The problem with these companies is that they’re operating at this giant scale of 2 billion, and they’re operating out of a small part of the world – Menlo Park, for Facebook – and the scale makes it hard. They don’t have the institutional and infrastructural knowledge. They don’t have the people to try to understand, and even then it wouldn’t be easy, right? I’m not going to pretend anybody has an answer to how you manage 2 billion people. This is really hard.
They don’t have that, and they don’t have the kind of mindset you need. I call it the “paranoia team.” I’ve been joking with my friends at these companies: You guys need paranoia teams, because you’re a bunch of optimists. And you can understand why that would be, because if you work at Facebook, and your stock options have matured, and you’re making lots of money, and you’re in your 20s, life’s great, right? They don’t have the proper mindset of asking what could go wrong. In fact, a lot of my friends in these companies are on the security teams, because the security teams have the paranoia mindset, but they don’t run the place; they just sort of keep hackers out. …
Warning Facebook Of Dangers
It’s interesting to me that you say they should have a paranoia department, to some degree. But tell me about this – you’re a member of this kind of interesting international group of people that have been calling this out and seeing it for what it is for a while. Just give me a sense of that. Are you ragtag, or what are you guys?
This group of people who were having these conversations, both among each other and kind of begging the companies, is a mix of scholars, activists and people in civic technology. This is not a bunch of people who are pessimists by any means. These are people who early on saw this technology’s potential, right? They’re in the space because they believe that all the potential things that can come from this technology can be great.
But they’ve also got their ear to the ground, so they’re hearing all of this; they’re seeing the centralization. And they don’t work for these companies, crucially, right? This is where the academic space becomes really important, because you had two choices back then. You could walk off your academic job, triple your salary, quadruple your salary – depends, you know; sometimes even more – and work for these companies, or you can try to stay out and just be an academic with a modest salary. It’s a great job, but it’s not as lucrative as working at these companies. But you would try to remain a watchdog.
So there is in fact a large number of academics in this space that have been trying to do this, and I was just talking to someone –
By “this” you mean what?
There’s a large number of academics in the space who have been trying to warn of all these issues with these giant platforms becoming so centralized, and with their algorithms. … It was mostly ignored. It was seen as unnecessary paranoia, and it turns out it was necessary realism. It was seen as being against technology. And I still get this. People ask me, “Are you against technology?” I’m like, “You are kidding me, right?” I’ve been programming since I was a kid. I love this stuff, right? I would identify easily as a geek. Give me some cool tech, and I’m so excited about it. It’s partly because I’m so excited about what technology can do, and all the alternative ways it could operate.
That’s why I’m here saying let’s not go down this road; this isn’t a good road to go on. There are a lot of people in the space, this little sort of ragtag group – there is no institution that does this. It’s just a bunch of us, a lot of friends; we come together at conferences, we go to things – people who are deeply motivated to make things better, because they know that it can be made better, right?

This conversation, I would say, was not quiet, but it was mostly ignored, because the way Silicon Valley works, I think, is that they have this disproportionate respect for people who found companies and for programmers. As a person who started as a programmer, I can say that’s a very different skill set – a narrower skill set than the kind of skill set we’re talking about. So it was kind of ignored. It was kind of like, that’s all squishy stuff, humanity stuff, people stuff, and we’re doing all this cool tech. And I’m like, “You are in the people business.” This has been my refrain for years: “You are not a tech company; you are a people company. You have to act like it. You use technology in the way you operate your people company.” But that realization – I don’t know if it has sunk in yet.
Facebook And Ukraine
Let me ask you this, though, on Ukraine. To your point – you write that what you start to see is this shift toward confusion; that the state starts to use a tactic of confusion because censorship no longer works. It’s your writing, so can you frame it that way?
So you start seeing two governments, Russia and China, both really advanced, creating a new playbook. The old tradition of censorship is that you find the voice and you shut it off; you keep them off media; you put them in jail – you quiet them. … In this new public space, where you can’t necessarily block people’s access to information, what you can do instead is make that information useless: drown it in disinformation, challenge its credibility, flood the place with so many fake stories and hoaxes that nobody knows what’s real anymore – distract people, right? You may have access to a piece of information, and that information may be really valuable, but you’re like: Is it true? Is it not? Oh, look over here, and oh, look at all these other pieces of information. Oh, by the way, something’s happening here. And people say this was fake – is this fake? You are confused. So the government hasn’t cut off your access to information – the powerful haven’t cut it off – but they have cut off the link between information and meaningful action by making that information not useful.
You see this manipulation of attention in two cases in 2014, which is this big turning point in terms of demonstrating how this works. One of them is Hong Kong, where there’s a movement called Occupy Central. And one of them is Ukraine, where you have this movement for freedom. In Hong Kong, the activists, young people, occupy central Hong Kong because China is trying to control what little elections they have. At first I remember hearing a lot of talk that they might cut off the internet; they might cut off access to, say, Facebook; they might cut off access to other things. There was even talk of this anti-censorship technology – I think it was called FireChat – where people would be able to connect to one another over Bluetooth. And I’m thinking, hmm, cutting off the internet, now that’s a Mubarak move. That’s a naive move. I didn’t really think China would do it. I’m just watching this. And what happened in the China case is that the central Communist Party of China – and I remember watching and thinking, it’s like I’m their evil adviser – did exactly what I thought a really smart government would do to smother a young, spontaneous movement.
First there’s a bit of teargas and pepper spray, and the kids use umbrellas to protect themselves. It was really quite colorful, very visual. And in this day and age, if you’re colorful and visual, you’re on the news. You know, a boring talking head like me has a harder time, right? So they get these really colorful visuals, so they get on the news, and the pepper spray gets them sympathy. And all of a sudden, the pepper spray is pulled. They just let them be. The kids upset some local merchants, and the local merchants hire some thugs; the thugs are pulled, too. I’m kind of like, they’re doing this on purpose. They want to make sure that the movement doesn’t get attention.
And everybody is expecting them to cut off the internet. I’m thinking, internally, they’re not going to, because if you cut off the internet, it’s going to be this big brouhaha and a second layer of attention. They’re kind of like: OK, have your internet. They didn’t touch the internet. I don’t think it even slowed down, except through congestion. And they waited it out, because the problem with these sort of visually fueled movements is that they don’t have the structure to see things long-term, so their internal bickering starts. That happened in the Arab Spring, too. Anyway, the movement kind of crumbled on its own, and only years later, China is now one by one arresting the leaders of that movement, who were very young at the time, in their 20s, and the world’s not paying attention at all, right? Because they waited to make sure – if they had arrested them just a month after, they’d have gotten attention.
They let the movement be denied attention, and then they do this. So this is the new way. In Ukraine, you start seeing an enormous amount of misinformation, a lot of fake news. You just can’t tell what on earth is going on. This insistent denial of things that are clearly, obviously true – they’re just officially denied. You see this enough, and you’re confused, because you don’t have half your life [to spend] trying to verify stuff, what’s true and what’s not. Then you start seeing offensive hacking. They turn off the electricity at Ukrainian power plants, just for a few hours. You know: This could happen. What a nice country you have. A shame if something should happen to it, right?
You see this very interesting mix of disinformation, misinformation and fake news on the one hand, and management of attention from the Chinese Communist Party on the other hand. For me, those were the two big demonstrations that authoritarian governments had this figured out, right? They knew what to do. They had a new playbook for censorship and social control for the digital age. And it definitely involves surveillance, right? Surveillance is one thing that people have always worried about, and there was already surveillance, and that’s an important part of it. But it wasn’t just surveillance. It was managing attention and flooding the information space with so much that it wasn’t useful anymore.
And with Putin, for instance, there seems to be a recognition, especially in Ukraine – and this later plays out in the U.S. elections – that when there is such a centralization of power among platforms, it becomes an obvious move on the part of authoritarians to try to figure out how to manipulate that. There’s sort of a recognition, it seems, in the Ukraine storyline of, “If everyone is here, then this is where we need to play.”
I think it goes back – there are two strategies, right? One is the Chinese strategy, where you start your own. But you need to have started 20 years ago, and it helps to have a country of a billion. The other strategy is, if you can’t beat them, you join them, and you fight there. And you start seeing this even in the Arab Spring. You know, the Egyptian military starts putting out its communiques only on Facebook, right? So it’s not just something that Putin did. What happens is that authoritarian governments alternate between a strategy of “Can we keep this out?” – and that’s not very viable in lots of places – and “If we can’t keep this out, how can we corrupt the space so that we control the narrative and we don’t let it be used for meaningful opposition?”
You see that, I think, more in the case of Ukraine. To give you a Ukrainian example: Facebook is so prominent there that if you want to connect with a Ukrainian activist and you ask, “What’s your email?,” they’re like, “I don’t use email; Facebook friend me.” They want you to Facebook friend them to have a conversation, because they probably have an email account they created to get on Facebook, but they don’t really use it. In Russia, I heard from lots of dissidents that they actually preferred Facebook because it allowed them to better verify whether somebody was fake or real, because it mixes the personal with politics, and they’re trying to suss out informants. There are all sorts of ways in which, for dissidents, this became the public space. But if that’s the public space, the public sphere, the government is like, “I’m going to go there, too,” because governments aren’t stupid, right?
They may be late to the game, because they're traditional institutions, but they're not stupid. And more importantly, they have a lot of resources, and they'll put those resources toward trying to figure out the new game.
… Do you think it should have been a warning sign to Facebook that Putin was gaming the system in Ukraine to try to spread disinformation?
I think everything that happened after the Arab Spring should have been a warning sign to Facebook. It wasn’t just Putin and Ukraine. It wasn’t just the new military regime in Egypt. You saw this all around the world, in which governments looked at what happened to Mubarak, looked at what happened to [Libyan leader Muammar Qaddafi], and said: “We are not going to turn our back on the space. We are going to infiltrate the space; we are going to flood the space; we’re going to try to get hired by these companies.”
Facebook And Political Campaigns
A couple of things are happening, right? Once Facebook goes into a country, it often does greatly expand the amount of free speech in the country, because before, you had to get on TV, and all of a sudden you can just be a Facebook page and boom, right? You can reach hundreds of thousands of people. So that's part of the reality that has to be acknowledged; this isn't some fake thing; this is real, especially for freedom of speech. On the other hand, its business model is to make money by selling people's attention, right? Facebook doesn't sell people's data; Facebook sells people to whomever's paying it. In many cases, governments realized they could do two things. One, they could start their own organic pages.
They could hire people, which they do. And they could have these professional – sometimes people call them troll armies, but these are government employees, and their job is to post stuff on Facebook, pro-government stuff. So they could [and] do use the organic-reach path, which a lot of governments do. They could also go pay Facebook and say, “Here, let me reach these people, and let me get my message out.” Facebook has teams of people who advise politicians around the world on how to better be a customer of Facebook – you know, “You come give us money, and we’ll do your targeting for you.”
There are these two particular ways governments can outpace activists. They can hire lots of people, and once you have lots of people in a space – because Facebook is also so understaffed, it doesn't really figure out what's going on unless some big scandal hits some little country, right? They don't pay attention to all the countries. And also, if their client is the government or a major politician, Facebook does a great job delivering people's attention to them for the money, because of its business model. Those are two ways in which governments have really gotten ahead in the Facebook game in many countries.
And what about the ethical implications of that on Facebook’s part?
Well, I mean, that’s the kind of question you want to ask Facebook, is that –
Have you raised questions about the ethical implications of that type of work?
Well, not directly in a conversation, but of course. I think it is wrong to sell people's eyeballs to whomever is paying you without being more discerning about what's going on. I actually think the whole business model is prone to corruption by its very nature. What's the difference? In the past you had advertising, right? You always had advertising. Fine. But it went to a broad public: If it was on TV, it went to a broad public. Whereas what Facebook does is profile you and surveil you and take all the data about you and figure out you as a person – not you as part of some demographic pool, but you as a person – and then sell access to your eyeballs, to your attention, to whomever's paying for that.
And it's not done in a public way, right? You turn on your phone, and you see it. Your neighbor doesn't know that that's what's happening. And that's the crucial difference between old-style advertisement, which hit demographic groups over the head with a club and didn't work too well, and this Facebook scalpel approach, where you find the person and – opaquely, almost secretly, just between you and Facebook – target their eyeballs for a price. … In fact, I first wrote about this in 2012, after Barack Obama got elected, and everybody was celebrating how great it was that he'd used this technology to target people, to identify potential voters and advertise to them on Facebook, and I thought, wow, you know what? That is really prone to misuse, because we don't know what he says. We don't know what's going on. And it wasn't about the candidate, right? He was using what tools were available. But I thought, these tools are really dangerous – having advertising that you can't see publicly, having information spread with no accountability, no controls, this kind of individual profiling. So I wrote this op-ed for The New York Times – I think it might have been my first New York Times op-ed – saying, look, beware of the smart campaign. It looks like something to celebrate, but there are all these potential dangers of misinformation and things happening in secret, outside of public view.
I got such a huge backlash. It's 2012, right? The Arab Spring had just happened, and it was kind of like the Arab Spring activists who thought this technology would always be on their side. The Obama campaign was full of data science people, and Silicon Valley has a lot of liberals and libertarians, and their argument to me was, "Don't you like our candidate?" I said: "It's nothing personal about your candidate, right? This is really not about your candidate. It's about these tools, and somebody has to think about what these tools mean for the public sphere."
And I remember very clearly, from very high-up people in the Obama campaign, from the data science team, from the social media team – well, actually, I heard three different arguments. One of them was, "You're out of your mind; this is ludicrous." So that's one argument. Another was: "You're right: These tools are horrible, but it's a race. We use them; the other side uses them; and we will use them." And the third argument I heard was, "These tools will always be on our side because they require science. They require being empirical."
I literally heard it said to me by a very high-up person on Obama's digital team that Republicans don't believe in climate change; they don't believe in science; therefore they'll never be able to hire data scientists, and therefore they will never be able to use this technology sufficiently. And my response at the time was, "You're out of your mind." That is not how history works. That's just not how history works. And the second thing I thought to myself was that they don't need to hire data scientists; Facebook will do that for them, right, because that's what the platform is. And that's what I was writing about: having this kind of platform be so dominant in politics but so non-transparent. At the time, if you bought a political ad on TV, you had to say, "I bought this" – you hear that disclosure – but if you bought it on Facebook, it just went dark. Nobody saw it, and I said, "This is not good; we should have a public record." And I was just dismissed by a lot of people who [said], "What's wrong with you? You don't like our candidate." They just personalized it. And it was quite stressful, too, because I thought, I'm making an argument about the public sphere.
So in 2016, when all this came to be, all of a sudden there are people who called me and said, "You know, you were right about all of this." I said, "If you just look at history, and you look at how things happened, this wasn't that hard to predict." In fact, even on ad transparency, which is the minimal thing, Facebook dragged its feet for years after all of that, and so did Barack Obama's administration, right? Because I think they were like the Arab activists: They thought this technology was on their side, and so there was this really light touch in terms of oversight and regulation. In the meantime, Facebook buys the competitors. It buys up WhatsApp; it buys up Instagram; it copies Snapchat's innovations, like stories, all those things – it's allowed to do that.
So it just sort of, well, creates this near-monopoly, and the Obama administration didn't really do anything to Google or to Facebook as they became these big duopoly entities. Then there's the 2016 election, where you start seeing the misinformation and fake news. Everybody focuses on the Russia part, but I think the Russia part is kind of a rounding error. There was an enormous amount of other kinds of fake news and misinformation and disinformation. We saw this happening before, right? This was happening in front of everybody's eyes. I started seeing this big jump in fake news and misinformation around April, May of 2016. In fact, later, after the election, I saw some empirical data. That's when a lot of stuff starts getting pumped from Russia and also from other sources.
I started writing about it. Other people were writing about it, and we were like, "Whoa, what is this?" And I think at the time Facebook's people thought they had just taken some flak for some minor thing about Trending topics having a few human moderators to assure quality – which was a good thing, right? There were a few people who were trying to assure quality, but in true Facebook fashion, that hadn't been disclosed, so that created a backlash, and they were like, whoa, we're just going to let everything be. And my guess is that they thought, we'll just let the election pass, and like the rest of the world, they thought Hillary Clinton was a shoo-in; she's going to win, and then maybe they'll deal with it, maybe they won't. And meanwhile, of course, they're providing an enormous amount of services to the Trump campaign.
Now, I started writing op-eds saying Trump's not a joke candidate; this is a real politician. He is in touch with a segment of this country. He's got a base. This is a viable candidacy, especially because of Facebook and social media and what's going on there. That was kind of treated like crazy talk, too. I was like, well, we'll see, right? And we did see. If I could eyeball it, the company should have had the data internally, and it was so loud and visible that you didn't need some secret access to their opaque data. It was just this flood of misinformation that went completely unchecked.
Also, there's a lot of talk about how Hillary Clinton had these huge digital teams, and they kind of flunked, and Trump's campaign really didn't have a lot of competent people, but they were Facebook's customer. They went to Facebook, which embedded people with the campaign and said: "This is how you advertise. This is how you target people. This is how you find them." Now, there's this current Cambridge Analytica scandal, where Facebook apparently let some of its data escape to whomever, and the company Cambridge Analytica says that it targeted people in particular ways.
Now, I personally don't think the company did what it claimed to do. I think it's a bunch of hucksters who were just trying to get more clients – they've now gone bankrupt – and they misused data. But Facebook itself is exactly what people fear Cambridge Analytica was, which is this individualized profiling that you can just purchase. And that's what Trump's campaign did. They didn't have a lot of expertise; they didn't have a lot of data scientists. It's true. My friends from the Obama years were correct that the Republicans are lacking in the overall number of engineers they have, but you have Facebook. You can go to the company and say, "Here's millions and millions of dollars; find me the people to advertise to."
Facebook has these great tools for microtargeting. It also has a tool, for example, that people don't realize how powerful it is. It's called Lookalike Audiences, where you upload an email list of people that you think are susceptible to your message. You upload it and say, "These are our supporters; now go find me more people like them." And Facebook's artificial intelligence engine will go find what's called a Lookalike Audience. If you, for example, upload a Custom Audience of 1,000 people and say, "These are our supporters; find me more supporters like that in Michigan," the A.I. engine will go and suss out who those people might be.
They’ll say: “Here. For a few pennies, go advertise to them.” I mean, this is – knocking on doors is great, but it’s really great to have this cheap way to figure out who exactly your supporters might be and to target their eyeballs and do it in a way that’s not visible. …
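The "find me more people like them" step she describes can be thought of as a similarity search. The toy Python sketch below is purely illustrative – the feature vectors, the centroid-plus-cosine-similarity approach, and all the names are invented for this example; Facebook's actual Lookalike Audiences system is proprietary and far more sophisticated:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def lookalike_audience(seed_ids, users, top_n):
    """Rank non-seed users by similarity to the average seed profile."""
    seed_vecs = [users[i] for i in seed_ids]
    dims = len(seed_vecs[0])
    # Average the seed users into a single "typical supporter" profile.
    centroid = [sum(v[d] for v in seed_vecs) / len(seed_vecs) for d in range(dims)]
    candidates = [(uid, cosine(vec, centroid))
                  for uid, vec in users.items() if uid not in seed_ids]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [uid for uid, _ in candidates[:top_n]]

# Hypothetical feature vectors (e.g. interest scores inferred from activity).
users = {
    "a": [0.9, 0.1, 0.8],
    "b": [0.8, 0.2, 0.9],
    "c": [0.1, 0.9, 0.1],
    "d": [0.85, 0.15, 0.8],
}
# Seed audience is {"a", "b"}; "d" resembles them, "c" does not.
print(lookalike_audience({"a", "b"}, users, 1))  # prints ['d']
```

The point of the sketch is the asymmetry she's describing: the advertiser supplies only a seed list, and the platform's data about everyone else does the rest of the work, invisibly to the people being found.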
This is our public space now.
This is our public space now, yeah. I mean, in some ways, it's both great and horrible. During the initial protests in Ferguson, Mo., in August of 2014, when traditional media wasn't covering much, I watched as one guy held up a phone and just turned on the livestream and had 400,000 viewers, right? On social media you can do that. That's just pretty amazing. On the other hand, during the same incident, Facebook's algorithm was prioritizing a completely different set of posts, about what was called the Ice Bucket Challenge, where people were dumping buckets of ice water over themselves – it was August; it was hot – and then donating to ALS research, a perfectly worthy cause. But Facebook's algorithm was prioritizing those charity donation posts over people writing about what was happening in Ferguson, Mo. – this protest that was getting bigger and bigger and that kind of sparked off the Black Lives Matter movement.
What the internet giveth, the social media giants taketh, kind of. The internet allows this world in which when mass media, like the traditional big newspapers or TV of the day, is kind of ignoring this movement, this protest, somebody can hold up a phone and boom, you’ve got an audience that equals CNN’s nightly audience. But on the other hand, you’ve got an ad-financed platform whose algorithm is like “I’ll show this and not that,” and then you’re smothered again, right? It’s this very complicated thing in which you have both this great expansion of free speech, but you also have these new giant players who got new kinds of control, and they’re capricious in their own way, and they’re whimsical in their own way. You’re kind of watching and thinking, will they crush me? Will they let me through? Will they let me have an audience? And you’ve also got all these players that range from foreign governments to ideologues to people just looking to make a buck from the ad money, saying, “How can I just put content out there that serves my purposes?” So it’s quite messy.
Facebook And The Distribution Of News
One of the things about Facebook is that it claims it’s not an editor in a way, that it’s not making a choice.
Facebook often claims that they're just letting through whatever people are posting, which is both true and false. Since they introduced the News Feed, which collects all the stuff that people are posting and chooses which pieces to show first and which to bury, they are effectively editorializing in an algorithmic way, which means that there's no person who sits and says, "You should see this, and you should see that." It's not like a traditional newsroom where somebody sits and makes decisions; instead, they write a bunch of programs and decide what to optimize for. And since they're an ad-financed platform, their interest is in keeping you on the site. So the algorithm wants to make sure you stay on the site. Let's say you have 200 Facebook friends, and there are 1,000 potential posts that you could be shown. The algorithm looks and says, "Ah, Zeynep – I think she will keep scrolling or stay on the site if I show her this first and this next and this next." It just prioritizes according to what it knows about me and what it's trying to optimize for.
That's why when you open Facebook on your phone – when you launch the app or log on online – you don't see things in the order they were posted, and you don't see things in the order you might want to see them. You are shown them in the order Facebook has chosen to show you, and with potentially 2,000 pieces of content and only five minutes to look, whatever is in the top 10 or 20 is what you're going to see. Whatever is number 1,993, you're not going to see it. It's kind of like Google. Google doesn't hide stuff on the web, but if you search for something and it's on the first page, you're going to see it. If it's on page 83, it might as well not exist. That's what Facebook's algorithm does. A lot of people don't really notice this. They don't notice that Facebook is not showing you everything. If you post something and nobody responds, is it because they have nothing to say, or is it because the algorithm has chosen not to show it to them?
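The engagement-ranking mechanism she describes can be sketched in a few lines. This is a minimal illustration only – the signals, the weights, and the post names here are invented for the example; Facebook's real News Feed model is proprietary:

```python
def rank_feed(posts, top_k):
    """Order candidate posts by a crude predicted-engagement score."""
    def score(post):
        s = 0.0
        s += 2.0 * post.get("past_interactions", 0)  # you engage with this friend a lot
        s += 1.5 if post.get("has_video") else 0.0   # a format the platform favors
        s += 0.5 * post.get("likes", 0) / 100        # early engagement by others
        return s
    return sorted(posts, key=score, reverse=True)[:top_k]

candidates = [
    {"id": "ferguson_update", "past_interactions": 1, "likes": 40},
    {"id": "ice_bucket_video", "past_interactions": 1, "has_video": True, "likes": 300},
    {"id": "baby_photo", "past_interactions": 3, "likes": 120},
]
# With only the top 2 slots shown in a short session, the news post is buried.
for post in rank_feed(candidates, 2):
    print(post["id"])  # prints baby_photo, then ice_bucket_video
```

Nothing in a scorer like this is malicious toward any particular story; the burying she recounts next falls out mechanically whenever one optimizes only for engagement.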
You don't know, and that editorial control through the algorithm is quite new, and we don't really know how to think about it. We're used to a bunch of people in a newsroom saying, "Let's put this on the front page." This isn't that – it's personalized person by person, and it works by crunching the data on you. That's why Facebook is also a surveillance machine: For the algorithm to work, it needs all this data about you to crunch, right? Facebook collects an enormous amount of data about you by tracking people as they browse the web. It purchases external data sets and matches them with people. It's now trying to track people's offline behavior – like if you go into a store or something. It just wants to eat up all this data exactly so that the algorithm can work better, because the more data they have, the better they can target.
So in my view it's actually the worst of both worlds, in that you're creating this enormous database, this enormous profile of billions of people, and you're also using that data to accurately figure out who they are, what their vulnerabilities are, what kind of person they are, what they might be susceptible to, and then you're going and selling: "Hey, whoever you are, pay us, and we will let you find exactly the people you want, because we have the data to target them. We will also use this data we have on people to keep them on the site as long as we can, since they have to log on; that's how they keep in touch with their friends and family." I think the business model is really at the heart of a huge amount of the problems that Facebook faces, because it's not a bad thing to have a platform where you can go connect with people – you know, friends, family, or otherwise. I think it's great. I use it, right? This is a great thing, to be able to connect.
The problem is that this connection is financed by a business model that brings all these ills with it, and the company seems set in that business model because it's making a lot of money. It's just going [along] its merry way, but all the problems – these externalities, these issues – are just dumped on everyone else, almost like pollution. You've got this company that's burning a lot of dirty coal, and it's just spewing all the dust and smoke into these cities, into the world. But they're very profitable as long as you ignore the pollution.
And there are very few controls, if any.
At the moment, I don't know if there are any controls, right? We have a European regulation on privacy that's about to come online in just a few days, in May, and that may change things a little bit, but I don't think it's going to really change things a lot either, because, for one, the company's user base is 2 billion people. You don't even see what exactly is going on. That scenario I outlined, with Facebook being able to shift elections by showing messages – if it did that in lots of countries around the world, and if it didn't tell anybody it was doing it, you'd have no clue, because once again, this is our public sphere, but it's not visible to the public. It's happening screen by screen, so who knows what's going on?
The Algorithm And The News Feed
Just explain the Ferguson thing again, just simply what it is that you observed.
Because I study social movements, right, a lot of people I know in the social movement world were super-excited after the Arab Spring about the potential of Facebook to spread their information and ideas out there. So that does happen, right? It does happen. But there's a twist to this, which is Facebook's News Feed algorithm, and a real wake-up call for me happened in 2014, when there was an incident in Ferguson, Mo., in which a police officer shoots and kills an African-American teenager who was unarmed. And this wasn't the first such incident in those months. In fact, if you look at the statistics, a lot of that had been going on, so there was a lot of anger in the community, and there had been a few such incidents before, so there was already an online conversation. In Ferguson, Mo., the community starts holding this funeral mixed with protests. They gather together. They're grieving; they're angry; they want answers. And the city sends in police, and there was a tornado or something nearby, so a bunch of national reporters happened to be near, and they go there. And we start seeing these pictures on Twitter, and the pictures are quite striking, because what you're seeing is armored vehicles, and you're seeing people on top of them, kind of in a sniper position. And you're just looking at this, and you're like, "This is the suburban U.S., right?" This is militarization of police, and this is a community that has just lost a young man, and they're angry and grieving, and the police start bringing dogs, and you're like, "What is going on?"
So on Twitter, there's this conversation about what is going on, and then two of the reporters from national outlets – one from The Washington Post and one from The Huffington Post – are in a McDonald's charging their phones and using the Wi-Fi, because that's what protesters and reporters around the world do: You go to a place to charge and get on the internet. And the police come and say, "All right, leave the place." Of course they're reporters, and they're like, "Why?" They're not used to just doing whatever the police tell them. And the police are like, "All right, you're arrested." They don't get a chance to tweet it out, but somebody takes this hazy picture, and what we see is Ryan [Reilly of Huffington Post] and Wesley [Lowery of The Washington Post], the two reporters, being stuffed into a police van. At this point, my friends in Bahrain are like, "Are you sure this isn't Bahrain?" My friends in the Middle East are following all this because it's such a familiar scene – police overreaction, journalists being arrested. And my whole Twitter feed is people talking about what is going on. You know, "Where is Ryan? Where is Wesley? What is the sniper thing? And why are there armored vehicles?"
So I go on Facebook at the same time. I'm like, all right, let me see what my Facebook friends are saying, because I study social movements; I have so many friends in social movements, and I thought, surely they're all talking about this. And … there's nothing. There are some posts about my friends doing the Ice Bucket Challenge, which is this really cool thing where you take a bucket of ice water and dump it on your head, and then you donate to ALS research, and then you post the video and tag people and say, "Now you do this," right? There are a lot of posts about that. I'm like, OK, I guess my Facebook friends aren't talking about it. So I go back to Twitter. All my Twitter friends are talking about it. My Twitter friends around the world are talking about it. There's nothing but Ferguson as a conversation. Go back to Facebook – babies, engagements, Ice Bucket Challenge. And I thought, wait, wait, wait. The News Feed's algorithm is arranging this for me. So I go and find –
It takes me some time, because Facebook doesn't make it easy for you to turn off this algorithm; its business model requires that algorithm. And I turn the algorithm off, and I start seeing things chronologically, and I'm like, my friends are talking about it; it's just that Facebook is choosing not to show it to me. Go back to Twitter: They're all talking about it. Go back to Facebook: It's all Ice Bucket Challenge. So what had happened was that the Ice Bucket Challenge was really algorithm friendly, and it was easy to like, right? Everybody clicked on "Like."
It had video, and the algorithm apparently liked videos. And you tag people, so that was social interaction. So the algorithm was effectively suppressing and censoring the story of what would become the Black Lives Matter movement and would create this enormous national conversation about a really important topic. …
That’s a pretty frightening thought.
… In fact, you already have places, as I said, that have Facebook as their main interface, as the internet. Here in the U.S., people think, oh, look [at] all the damage this kind of stuff has caused. Well, there are places that don't have the many, many institutions we do have in the U.S., like an independent press that's relatively well funded compared to the rest of the world and has a lot of legal protections, right? They don't have anything but Facebook. In fact, this is kind of why a lot of the early warnings about Facebook's power came from the Third World. They're more susceptible to it. But because they're not in Europe, and they're not in the U.S., they kind of got ignored. We would have been much better off had we paid attention to what happened in poorer countries or less developed places, because they were already seeing these effects and warning about them.
When is it that you start to kind of recognize – and we may have to break in a little bit – when is it that you start to recognize the algorithm as kind of problematic, or is that – ?
The day they introduced it. The thing is, when they introduce the algorithm, it becomes immediately apparent that it's a gatekeeper. This is the kind of thing you do not need 10 Ph.D.'s to figure out. If you've got a computer program that's deciding what to show first and what to bury, and it's doing that by crunching a lot of data on you, that is a powerful gatekeeper, right? If the fact that it is this complex, powerful gatekeeper surprised Facebook, that would surprise me, because what else do you expect it to be? You put in a gatekeeper; of course it's going to act like that.
And a gatekeeper to what…?
It's a gatekeeper to what information you see in the world, what social interaction you see, what you see from your friends and your family, what you see from news organizations, what you see from institutions, because once again, attention is the scarce commodity. We don't have 24 free hours in the day to check all the possible things, right? We have lives; we have jobs; we have things to do. So you spend a little bit of time, and a lot of social interaction and information gathering happens through social media.
Facebook In Myanmar
You want to talk Burma 2013?
The thing is, when I say countries where Facebook is the internet, a lot of those countries are places without a lot of the institutions that help temper some of this for the U.S. and the West. In fact, one of the things that Facebook does is test its products in low-bandwidth environments, right? Not everybody around the world has a smartphone, so Facebook has engineers who are given feature phones – these little flip phones – and they try to use Facebook on those phones.
The idea is that we shouldn’t assume that everybody has Western levels of more expensive technology. The problem here is that what Facebook doesn’t do is test its product on low-institution environments. It tests it for low tech. It doesn’t test for low institution. A crucial example of this is the situation in Myanmar, in Burma.
Can you say that again but explain what you mean by low-institution environments?
Right. What Facebook does is it tests its products on places where technology is cheaper and more primitive. What it doesn’t do when it expands like this to the globe is test its product in places that don’t have the same kind of institutions of liberal democracy that we kind of take for granted at times, like an independent press, a functioning judiciary and things like that.
A crucial example here is the case of what happened in Myanmar, or Burma, as the country was once called. Myanmar transitioned from an authoritarian military government to a democracy, and it seemed like this really hopeful story. The movement was led by a woman [Aung San Suu Kyi] who got the Nobel Peace Prize, and the country was flooded with cheap phones and SIM cards, so people started getting connected. It was really fast.
Facebook also went in and spread very rapidly, especially on people's phones. So what we started seeing very early on was that there was, unfortunately, an enormous amount of hate speech demonizing the country's poor Muslim minority, the Rohingya people. They're Muslim, and they're this powerless minority in the country.
There was this enormous amount of hate speech on Facebook, led by extremist Buddhist monks, that was just spreading like wildfire, and it was the kind of blood libel stuff that you would expect the Nazis in Nazi Germany to say about Jews: They will kill your children; they will do this; they will do that.
It was apparent this was a problem early enough that in 2013 I posted a tweet basically saying that we are eventually going to see a social media-fueled ethnic cleansing campaign, and I think it will be in Burma. This was 2013. I'm not really a Burma expert per se, but I had my ear close enough to the ground to be hearing all this. And over the years, I know that civil society people and other actors from Burma begged Facebook. They even collected what data they could, and they went to the company and said: "Please don't let these extremist Buddhist monks spread all this misinformation and hate speech. We've already got the ethnic cleavages, and it's just spreading like wildfire in the country."
And to this day, the company badly lacks enough people who speak Burmese and who are intervening. In the meantime, Burma/Myanmar has seen a major ethnic cleansing campaign that has resulted in the second biggest refugee outflow in the world after Syria. A U.N. report found that Facebook was instrumental to the spread of hate speech in the country, and that these monks, for a long time, could easily go viral.
There was a piece just recently in The New York Times where they went to interview people, and people were like, "We know Muslims are horrible; they're doing all this stuff." "And how do you know?" And the person said, "I saw it on Facebook." Earlier than The New York Times, BuzzFeed, which does great reporting, sent reporters to document how Facebook was so instrumental to hate speech spreading in Myanmar.
After the U.N. report finally and after all of this, there was some acknowledgment, because in the congressional hearings, Mark Zuckerberg was directly asked about it, and he said, “Well, we’ll try to do what we can.” I don’t remember exactly what he said. So this is a sad and striking example, because Facebook might say it was blindsided, but if I could see this in 2013 – and again, I’m not specially an expert in Burma – if I could see this, if I could hear of this in 2013, a company with half a trillion market capitalization should have seen this. And the way I imagine an ethical company would work is Mark Zuckerberg would wake up in the morning and say, “Give me an update on what’s happening in Myanmar,” because this is not how you want to go down in the history book.
… If, like me, you’re following this stuff, you see years and years and years of people begging and pleading with the company, saying, “Please pay attention to this,” through every channel people could find, and basically being ignored. And when finally publicly called out by Congress and by the United Nations, they’re like, “All right, we’ll now try to hire people.” Well, you should have done this long ago. I think what is happening is that this company is way in over its head in terms of its responsibilities.
It’s way in over its head in terms of the power it holds and in terms of recognizing and accepting that it’s in the people business. It shouldn’t have come down to this – this much ethnic cleansing already happening – for them to finally try to hire a few dozen people. And even that’s not going to be enough.
Once it gets to this point – and I want to be clear: The idea isn’t that you just magically add Facebook and horrible things happen. You already have the brewing ground; you have the government kind of participating in this, too. But you have Facebook as this effective gasoline on simmering fires, and they’re not doing what they could, given their resources and their responsibility and their power in this whole thing. This is not the [only] country where we’re seeing something like this.
And what they could do is to basically censor content or to moderate content. What could they do?
Well, lots of things. One of the things is, if they had enough people watching the country – the kind of horrible hate speech we saw clearly violates Facebook’s Terms of Service anyway, right – they could have not let it spread in the first place. In Germany, where there are also similar hate speech laws about Nazi speech – obvious why, given its history – Facebook has thousands of people. But Myanmar, it’s just this poor little country, so it gets ignored. It’s far from Menlo Park. It doesn’t register; it doesn’t have regulatory power; it doesn’t have that kind of power. If you are the megaphone to what is essentially hate speech calling for ethnic cleansing, and thousands of people are dying – right now we have millions of people who have fled the country, which is destabilizing Bangladesh, because they fled into Bangladesh and they’re stuck in camps there – you have a responsibility to do what you can. It’s a complex problem; maybe if they did everything they could, all this happens anyway. But at least they could have done everything they could, rather than being this easy gasoline poured on these simmering fires.
Mark Zuckerberg And The Facebook Mission
You wrote this article about the apology tour. You’ve been in a weird dialogue with him over the years, right? How do you explain the trajectory of what they were out there saying versus what the reality was?
What I see is a young person in over his head, because this is the company going from just a tool – where the first thing he did was scraping people’s pictures from the digital yearbooks and having students rank men and women as who’s hotter, right? It’s kind of this college thing – to having 2 billion people as its user base in about 14 years. That is a very rapid thing. And Mark Zuckerberg, when he became a billionaire, was he even 20, right? When you are a billionaire at such a young age, your world is full of people who suck up to you, and everything looks great.
Of course you think everything’s awesome, and you’re an optimist. And the tool has gotten so much bigger. Sometimes I think there’s a sort of caricature of Mark Zuckerberg, and that’s partially aided by that movie The Social Network, which I think was completely false and absolutely misled people as to what the problem is. Aaron Sorkin’s view of Mark Zuckerberg is somebody who’s after status at Harvard. That’s Aaron Sorkin’s fantasy, because that’s old-world kind of thinking. What we have instead is young techy people who come from a very narrow slice of humanity. They’re technical; they’re kind of geeks. They went to elite colleges – Harvard, Stanford, Carnegie Mellon – great computer science programs in these places. And then they created technical tools that ended up being major people tools like this, very fast.
Sometimes you see this image in the media as if you have a few evil people destabilizing the world and creating upheaval. I think it’s the opposite. I think we have a large number of well-meaning people who are, again, in over their heads. They come from a very narrow slice of humanity. A friend of mine compared this to a bunch of kids given a bulldozer. They’re like, “Whee!” You know, it’s really great fun. What you have here isn’t a bunch of evil intent. What you have is a structural push by the business model and a bunch of people at the helm who don’t really grasp what their tool is and what their product is. In fact, in the congressional hearings, Mark Zuckerberg was asked a bunch of questions – does Facebook collect this data, does Facebook do this – and he didn’t know, or he gave the wrong answer. And I was kind of watching, and I’m like, wait, I know the answer to this. He’s the CEO of the company; how does he not know? I think he’s not in the weeds of the business as much.
And, you know, he’s just one person who happened to be in the right place at the right time with a product that a lot of other people were developing, too. It was just one among many; one of them was going to take off, and it happened to be his. In my view, this extensive focus on him and on what kind of a person he is is almost misguided. I wish it were just a Mark Zuckerberg problem, because then you organize a rebellion in Facebook, and you take out Mark Zuckerberg, and you put in your good king, and then you solve the problem.
It’s actually much worse than that, because it almost doesn’t matter who runs the company: As long as the incentives that run the company, and the regulatory environment that lets the company be this way, stay the same, this is where it’s going to end up. If anything – and this is something people find hard to believe, because I’ve been such a loud Facebook critic over the years, but I’ve had so many internal conversations with these people – there’s a lot of stuff they hold back in terms of the surveillance and targeting they could be doing. And they hold back because they have half a trillion market cap, so they can afford to hold back.
In a scenario in which you have a different Facebook that’s completely unmoored from Mark Zuckerberg and completely run by Wall Street, there’s a lot worse they could do. Again, this may be hard to believe coming from such a loud critic of them, but I’ve seen this over the years. If anything, we have someone who I believe doesn’t fully understand the impact of what the company he created has unleashed on the world. I wrote an op-ed right before he was about to testify, because everyone was saying: “You’ve been writing about Facebook for so long; you’ve been criticizing them; you’ve been right about all of that. What should we ask?”
We as a society should have these conversations and say, “Facebook, Google, Twitter, this is what you’re allowed to do, and these are some of the safeguards you can do.” And this should be accompanied by industry self-regulation. Industry should come together and say, “OK, here are all the big questions, and here are some of the metrics, and here are some of the ways of doing this.” And they should do all of that, too. I think the important thing here is not to focus too much for me on what one person could do, because I don’t think he either has that kind of power or I think he shouldn’t have that kind of power. The questions are for us to answer, not for him to just decree, because who died and made him the king of the public sphere, right?
What about going back in time, though, about his apologies?
Right. After the Cambridge Analytica scandal, there was this radio silence from him and Sheryl Sandberg, who is a really influential player who doesn’t get a lot of attention. She helped put in place that business model that’s been so destructive. So they went on this tour, both of them apologizing, saying, “We’ll do better, we’ll do better, we’ll do better.” Now, for a lot of people in the United States who hadn’t been paying a lot of attention, 2016 was a turning point. They started paying attention to Facebook. So they might just hear: Oh, the company got blindsided, and now they’re apologizing; they’ll do better. I was like, wait, I’ve been here; I’ve watched this. The first apology that Mark Zuckerberg publicly put out was before he founded Facebook, for the “Hot or Not”-style scraping site he created at Harvard, where he had students rank other students on their hotness. So he apologized for that, and then he created Facebook.
… “Oops, we were blindsided, and now we apologize, and we will do better.” There were so many that I couldn’t actually find every instance. After I published the article and listed the whole lot, I was like, “Oh, wait, I forgot that one,” because it had just been nonstop apologies. That’s been Facebook’s tactic: to say, “Oh, we were blindsided,” when in fact people had been warning them, pleading, begging, for years – and then to apologize, after the fact. I was recently at a conference in Europe, and they have similar problems there, and there were Facebook representatives there. It was the same thing. It’s like a playbook. They’re like: “We hear you. You’re concerned. We apologize. Of course we have a responsibility. We’ll do better.” The public record here is that they are a combination of unable and unwilling to grasp and deal with this complexity, so the apologies – to me, that’s not what we need.
Mark Zuckerberg early on starts calling Facebook a community.
Still does, yeah.
What did you make of that when he starts using that term?
So very often, Mark Zuckerberg refers to Facebook as a community. In fact, he did that recently, too, after the Cambridge Analytica scandal. Well, a community is a group of people who have some reciprocal responsibility to each other, and Facebook isn’t that. For one thing, Facebook itself is 2 billion people, and they’re all in lots of different communities with each other. In terms of their relationship to Facebook, it’s one-sided, right? Facebook decides what the design is. Facebook decides what kind of data it’s going to collect. Facebook decides, “All right, we’re going to pivot to video,” and then all of a sudden the algorithm likes video, right? It’s not like you’ve got a say in this. Facebook’s algorithm decides what you see first and what’s buried, and we don’t really get to have a say in shaping all of that. Also, 2 billion people – there’s a lot of communities in there.
… In fact, you see this mixture of naivete and arrogance in the way Facebook and Mark Zuckerberg have defined things over the years. Early on, he took a lot of fire for saying: “Why would you want to be two different people? You should use your real name and have everything be public and the same,” because he saw people wanting to have different aspects of themselves as a kind of hypocrisy, as he described it. Now, he later said, no, I changed my mind, but for a long time, Facebook was designed exactly like that. It just kept nudging you toward being public and public and public. Well, the way I look at it, if you’re a billionaire at 20, it sounds great to be yourself all the time, because everyone’s kind of sucking up to you and just saying nice things to your face, whereas for normal people, there are social rules, right? You’re not the same person with your colleagues as with your parents as with your kids as with your lover. You have different aspects of you. They’re all you. But they’re these social roles. This is such an obvious thing that if a sociology student I taught came up to me and said people should be the same to everyone, that that’s the way people are, I would just flunk this person as not having read the first chapter of a sociology book. These are basic social science things that you don’t even need an education to see. So then Facebook moved on from that.
Then it went on to the next one – I think it was something like connecting people to one another, “Making the world more open and connected.” I was like, this isn’t any better. Making the world open to what? Making the world more connected isn’t always good – connecting Nazis to one another, connecting white supremacists or people who want to go on an ethnic cleansing rampage, that’s not a good thing. This isn’t a neutral value. I think right now the slogan is something about bringing people closer [“Bring the world closer together”], and once again, closer to do what? They keep wanting these broad slogans that sound nice if you don’t think about it. But as soon as you think about it, they sound either naive or arrogant, because even if you could do that, it’s not even clear that’s a good thing to do. It’s not clear you can do one thing at the scale of 2 billion, and there certainly is no single community of Facebook users. There’s 2 billion users, and there is Mark Zuckerberg, who has the controlling stock in a company that’s really big.
You see this sort of philosophical incoherence, and [they are] not able to grasp this huge tool they’ve created that has all these impacts. I think that’s partly how they keep operating. It’s like they’re in this giant boat that’s leaking, and they’re like: “Oh, tiny thing. We’ll patch this; we’ll patch this; we’ll patch this.” I’m like, the whole thing is made badly; you need to change that. And there are constant Band-Aids here and there. Of course, this is not literally a sinking ship – this is a very rich company. They’re putting the Band-Aids here and there as they get called out for stuff. Meanwhile, they’re making a lot of money and being very successful business-wise. After all these scandals, Facebook’s profits are still going up, so they don’t really have a huge incentive to change the core problem, which is their business model. But as long as that remains the same, I think we’ll just keep hitting these icebergs, and they’ll put on a Band-Aid and apologize.
Facebook As A Surveillance Company
Situate us back when they were developing this business model, without the knowledge that we have now. What were they building?
The internet, once it started, didn’t really have a common business model. There were a bunch of people who put up their webpages, and the question was, “Who’s going to pay for the servers?” And very early on, ad financing – you would take some ads, put them on the site, and that would finance the servers – kind of took off. So we ended up, with both Facebook and Google and most of the digital economy, with a public sphere of platforms that are financed by ads, kind of like newspapers were – which was also a historical accident. But that papers over the difference. When newspapers were financed by advertisements, that had unhealthy effects, too; it wasn’t great. But the ads didn’t reach people person by person by profiling them, and it wasn’t a secret. Everybody could see if you bought an ad in The New York Times. If you bought an ad on CBS News in the evening, millions of people saw it at the same time. That tempered some of the harm, because you weren’t picking out one person, using artificial intelligence and data collected on them to figure them out, and targeting just them without anybody else seeing. You were just kind of hitting people over the head with a big baseball bat. That is 20th-century advertising, whereas Facebook or Google advertising is a scalpel that comes at you and pushes your buttons directly, and it’s not public. And that is the core structural problem that’s set up.

And also, the way modern algorithms work – they are called machine learning – they are not programmed like a recipe; you don’t give them step-by-step instructions. If you took a computer science course, that’s the kind you learned: how to write step-by-step instructions. Modern algorithms are a totally different animal. They work by crunching enormous amounts of data.
What you do is collect tons of data. You feed it to the machine learning, and it just crunches, crunches, crunches, and creates these giant matrices that make decisions. The thing here is that, structurally, the incentive is for them to be a surveillance machine, because since you’re not giving instructions – it’s just eating the data – it has to have a lot of data to eat to do what it does. So you see, in the current digital economy, a company like Facebook is incentivized to collect as much information as it can about you: surveil what it can directly, and collect what it can through tracking pixels around the web that track you online even when you’re not on the app. If you’re on Facebook, it’s collecting everything you do. If you’re off Facebook, it’s using tracking pixels to collect what you’re browsing. Increasingly, it wants to track people offline. If you go into a store, it wants to kind of match you to your Facebook. It wants to purchase data – it does; in the U.S., they purchase all sorts of data about people. They profile people who are not on the site by using contact lists, like phonebooks, that other people upload to Facebook. Facebook goes: “Oh, here’s a person whose phone number is in these people’s phonebooks. This person is not on Facebook, but here’s a person; I know where they are on the network.” It creates a shadow profile of non-users. It does all of this because it works as a surveillance machine. For its algorithms to work and for its microtargeting to work – for its business model to work – it has to remain a surveillance machine. …
Can you briefly explain what the platform was and why – without tying it to Cambridge Analytica – what it was at the time that attracted third parties?
… Until 2014, Facebook had this idea that it was going to be a platform for other apps. If you were an app developer, you could create an app, and these apps were all sorts of things. A lot of them were quizzes – “Which Star Wars character are you?” – you answered a bunch of things; it was fun. A lot of them were games; they were fun. If one person downloaded that app, Facebook allowed the app to have access not just to the information of the person who downloaded it, but to a lot of detailed information on all of his or her Facebook friends. So if I downloaded a quiz app – “Which Star Wars character am I?” I think Chewbacca, right? – and if I had 300 Facebook friends, that app could just siphon off my 300 friends’ detailed information: their religious preferences, their political preferences, very detailed information about them. I’m not sure about the religious part, but that app could siphon off an enormous amount of detailed information.
If, let’s say, you had 100,000 or 200,000 people download your app, you could potentially grab enormous amounts of information about, say, 40 or 50 million people. Why did Facebook do this? I think partly they had this weird vision that they were going to be a platform – ask them, not me – and partly it was a way to incentivize a lot of developers to create apps for Facebook, because if you developed a fun app for Facebook, all of a sudden you had this goldmine of detailed information. If you could just get 100,000 people to download your app, all of a sudden you had detailed Facebook information on, say, 30 million people. It was just an enormous goldmine.
And there were thousands of such apps. The recent scandal has focused on one such app, deployed by Cambridge Analytica. But I would be surprised if any person who was on Facebook at the time – about a billion people, before they shut off that access in 2014 – has not had their data siphoned this way. If you were on Facebook in 2014, I would wager folding money that some app took your information. We don’t have detailed information; only Facebook knows which apps accessed what.
… Once again, making the site useful or fun or engaging or entertaining allows Facebook both to collect more data about you, because you’re doing more stuff on the site, and to serve you more ads. It goes back to the business model.
You know computers really well, and you’ve studied this stuff, and you’ve still had a difficult time figuring out your own privacy settings?
I am confused about privacy settings. Pretty much every week or month, I discover something I thought I had turned off, and I’m like, “Oh, wait, it’s on.” And then there’s something else that I think I have control over, and I don’t. It got to the point where, since about 2008, I treat Facebook essentially as a public space, because I have no trust in my ability to understand its privacy settings. I basically treat everything I post there, including messages, as essentially completely public, because I don’t really understand what exactly is public and private, despite studying this for a long, long time. Which is kind of a pity, because I left my home country. I have friends all over the world, and a lot of them are on Facebook, and I wouldn’t really mind having more interaction on Facebook. But the confusion over the privacy settings, and still discovering new stuff, has made me wary of using the platform the full way I could. In fact, I think this is the key thing: Having control over your privacy is what allows you to be public or not public. If everything is constantly exposed, you just shut up. You’re like, “I don’t want to say anything,” because you’re under that harsh glare nobody wants to be under.
So there are no constraints as of right now on how they use our data?
As of right now in the U.S., there are a few constraints on financial and health uses of data that come from laws that concern finances and laws that concern health. We have some health privacy legislation, but the vast amount of potential uses are not legislated for, because who knew? Who knew you could predict people prone to depression, and you could also probably predict people prone to being dissidents, and you could probably predict who’s going to be pointing out workplace violations in your workplace, right? There are all these things you could be doing at scale, cheaply, that we have no laws to guard against, because when these laws were written, in the 20th century, we didn’t have this technology. It’s kind of like saying it’s illegal to mind-read. Well, there’s no such law, because we can’t mind-read. You don’t have a law against something that isn’t possible.
We have fairly few protections, and what protections we have, the companies keep violating, too. There was a recent scandal where they were letting people target [housing] ads and exclude some ethnicities, so you could say, “Show this ad to white people and not to African-Americans.” This is blatantly illegal in the United States, and the fact that Facebook didn’t have lawyers who said, “Whoa” – there should have been people, a paranoia team, who said: “Look, here are all these things we could do. Let’s figure out which ones not to do.” It was, I believe, ProPublica who caught this. This is, once again, another example. I don’t think this is intentional. As soon as they were called out on it, they were like, “Oh, my God, sorry, sorry,” and they fixed it immediately. But the fact that it was an external third party that pointed out such a basic thing kind of shows you the company is unable to handle the power of its tools.
What was it like for you in 2016 as the public sort of catches up with where you’d been for almost a decade?
Well, it’s not a good thing to be right about, right? And this is the part I emphasize: I’m actually an optimist, as are most of the critics. We’d been warning about this because we thought it could be fixed; we wanted it fixed, and it wasn’t. You saw the spread of misinformation and the secretive online targeting and all those things happen, and a lot of the U.S. establishment was kind of shocked by Hillary Clinton’s unexpected loss. Very few people were expecting it. They’re like, “Whoa, what are these tools?” And in Silicon Valley, which tends to be more liberal or libertarian – the workers at least; not necessarily the managers and executives, but the workers – a lot of people were like: “What did we create? What are we building?” Because it’s not just losing an election. It’s that there is a lot of misinformation and all of that.
… So that’s where I find myself: I think there’s a lot to be done, but we’re still not having the tough conversations, the difficult conversations, as a society and saying, OK, it’s like we’re in the aftermath of the printing press, but everything has been compressed into a few decades. We don’t have hundreds of years. In the aftermath of the printing press and the Industrial Revolution, there were many great things that came out of it. Modern medicine has meant that we’ve basically won a war against infectious diseases, and that’s helped humanity greatly. But we also had hundreds of years of war, and two global wars, that came with this upheaval, right?
The question is: Now we have the digital public sphere, and we have artificial intelligence, and we have all these complicated things. Everything from employment to inequality to polarization is wrapped up in questions of technology, and my hope is, can we fix these things – which we can, if we have the political will – before we have a genuine catastrophe that’s global and that’s horrible, right?
Is it fixable as long as one or two or three major private corporations control that space?
Fixing all of this would clearly involve not having two companies, Facebook and Google, basically rule our digital public space. But it would be more than that. You don’t want to just break Facebook up and have five ruthless Facebooks operating under the same incentive structure, or the same with Google. Now, the thing I’d like to point out is that Silicon Valley thinks of itself as this really innovative space, or markets itself as a very innovative space. In reality, they are stuck in a very boring, narrow, dull little part of technology.
They are ad-financed, both Facebook and Google. They just collect data, throw it at machine-learning algorithms, and they’re printing money, right? There’s an enormous amount of technological innovation – and I say this putting my technologist hat on – that I read about that nobody’s investing in. They’re all in this narrow slice where these two companies are making an enormous amount of money. Right now, a lot of artificial intelligence startups in San Francisco and Silicon Valley, all they’re doing is trying to be bought up by Facebook or Google. This is not innovative.
There are all these technologically interesting things we could do, all these different business models that could be developed, that would let you have all the fun stuff we do with technology, all the conveniences, all the social interaction – all the things that I think are normatively good – without this pollution, without this radioactive waste, without this surveillance, without this social control, all these things. But they’re not going to do this as long as they’re doing so well financially and there’s no regulatory oversight, and consumer backlash doesn’t really work, because I can’t leave Facebook.
All my friends and family around the world are there, and I’ve tried getting people to use other mechanisms, and this is where they are, so this is where I am, right? We don’t have tools yet to make the move.
And these companies are on such a profitable gravy train that even if there are a lot of well-meaning people in those companies, they’re not getting off it. So I would like them to get off this narrow little thing and explore – not just them, but really have genuine innovation – and not have a moment 10, 20, 30 years down the line where there’s so much upheaval in the world that we can’t really fix it. We don’t want to see the world burn down the way it did with World War II before everybody comes to their senses and says, “Can we do this a different way?” … What you need to do is have that kind of big vision of how you deal with technology in the 21st century. But hopefully – I don’t want to be retiring and have everybody saying, “Oh, you were so right.” I would rather be totally wrong about the downsides because we avoided them. I would love for people to come and say, “You had all these scary scenarios, and none of them happened.”
I’d be like, great, that was really good. I think that’s really possible. Otherwise, why bother doing all this criticism? This is the part that a lot of people miss about the critics: Being a critic is to be an optimist. You think it can be fixed. Otherwise, why do all this trying to explain? And that’s what so many of us have been doing in this space for so long, because we believe it can be better – technically, socially, politically, financially. Different models are possible, and it’s time to go explore them.
originally posted on pbs.org