The Facebook Dilemma | Interview Of Sandy Parakilas: Former Facebook Platform Operations Manager


Sandy Parakilas was the platform operations manager at Facebook from 2011-2012. He is currently the chief strategy officer at the Center for Humane Technology. This is the transcript of an interview with FRONTLINE’s James Jacoby conducted on February 28, 2018. It has been edited in parts for clarity and length.

OK. Yes, actually, let’s start there. What’s it been like for you to kind of come out of the tech shadows and talk these days?

So I wrote a piece that ran in The New York Times back in November [2017], just after the hearings happened, the congressional hearings with Facebook and Twitter and Google. At the time, it was very difficult for me to write the piece. I was concerned about my relationships with people who worked for the company. You know, I just … Like, being a whistleblower isn’t any fun. Right? There’s nothing about the process that is particularly enjoyable. But I felt that I had a responsibility as someone who had more of a sense of what was going on than most people do, because of having worked at the company and seen how they operate. And I was just concerned that they weren’t going to be held accountable in the way they had to be if the country was going to move forward.

And so once I wrote the piece and got it out there, I was overwhelmed by how supportive everyone was. Everyone, meeting other people in the tech industry, former Facebook employees, people in government, people I hadn’t talked to in years would reach out and say, “Thank you so much.” So it was really an overwhelming experience to understand how much people wanted someone who had this perspective to come forward.

And how unusual is it in Silicon Valley for people to come out and talk about the inner workings of this place?

It’s pretty unusual. What does happen is people talk to the press on background a fair amount. There’s always been a certain amount of leaking but it’s pretty unusual for people to come out publicly and criticize the company that they used to work at. So that’s a new thing. I think the fact that I have come forward and a number of other people who’ve worked at Facebook and also Twitter and some of these other companies have come forward and have been talking about these issues really openly shows you how much of a difficult situation we are in with respect to technology and democracy at the moment.

Technology And Democracy

So technology and democracy – what do you think the problem is? How do you diagnose the problem that emerged in like 2015, 2016, 2017, as you see it?

There are actually a bunch of different problems that are all related but separate. There is a truth problem, which is that we have moved away from a media landscape where there were arbiters of truth, to use a phrase that people in tech sometimes use. The media companies determined what was fit to air and what wasn’t, and so content was either put forward or not. And now we don’t live in that world. Anyone can post anything, and the tech platforms can only react. And so that loophole, so to speak, has been exploited by a number of bad actors, whether it’s conspiracy theorists or foreign state actors or a number of others. So we have a big problem with the truth.

Another big problem we have is that the targeting capabilities of the technology advertising platforms are so powerful now, and they know so much about you, that the ability to manipulate people through incredibly targeted advertising is at a whole different level than it was even 10 years ago. You know, the Trump campaign famously was running, I think, 100,000 different versions of ads at the same time. That gives you a sense of how targeted this can be. On Facebook you can show an ad to as few as 20 people at a time. So you can get a custom list of just 20 people and you can show them an ad [that] is really targeted to that tiny, tiny group of people. So that is also a new thing.
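To make that scale concrete, here is a minimal, purely illustrative Python sketch of how a campaign could combinatorially generate 100,000 ad variants, each pointed at a tiny custom audience. The building blocks and counts are hypothetical, not Facebook’s actual ad tooling:

```python
from itertools import product

# Hypothetical creative and audience dimensions; real campaigns combine many more.
headlines = [f"headline_{i}" for i in range(50)]
images = [f"image_{i}" for i in range(40)]
audiences = [f"audience_{i}" for i in range(50)]  # each could be as small as ~20 people

# 50 x 40 x 50 = 100,000 distinct ad variants, the scale described above.
variants = [
    {"headline": h, "image": img, "audience": aud}
    for h, img, aud in product(headlines, images, audiences)
]
print(len(variants))  # 100000
```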

And then I’d say the last thing is that the technology companies have gotten to a level of monopoly power that has not been seen in this country since the Gilded Age. It has been very, very difficult for government to regulate them because of their power, because of their ability to shape the information landscape. And so there has been a lot of reticence to take fairly straightforward, common-sense actions that would prevent some of the stuff that happened from happening.

So bring me into your story a little bit in terms of what happened over the election and your sort of awakening and your wanting to own up to having been a part of creating this. Where would you start that story in your kind of own journey recently?

So I was working at Facebook in 2011 and 2012, and I worked on a lot of privacy issues and I was frankly concerned by what I saw at the time. I had flagged, for example, that I had some concern about foreign state actors using Facebook data to do nefarious things. I hadn’t thought of the idea that we might do that in the U.S. I was more concerned with the Syrian government, for example, taking action against Syrian activists inside Syria – this is during the Arab Spring – using Facebook data.

And so while I had concerns while I worked there, and then after I worked there, I stayed quiet, frankly, out of self-interest: I didn’t think that what was going on was a true emergency, and it was much easier to not talk about it than to talk about it. And what happened in 2016, for me, was just absolutely terrifying, because I saw that a lot of these tools were being used. Tools I had some, however small, part in helping to build were being used to do things that I thought were very manipulative, and to advance an agenda that I thought was extremely hateful and dangerous.

And so, seeing that, I became concerned, and I wrote a Medium piece in December of 2016, right after the election, that was relatively critical of Facebook and Twitter and Google. But it wasn’t a whistleblowing kind of post. I didn’t talk about my experience per se.

And then I didn’t really speak publicly about it again until the hearings happened, until the information came out about the Russian ads that were purchased and then the congressional hearings that happened on October 31st and November 1st. And seeing how those companies reacted and how they presented themselves, and seeing how they used some of the same strategies, common strategies that I had seen and implemented myself back when I worked there in very different circumstances, with no national security stuff on the line, with democracy not on the line; seeing that the approach we took was the same, and was, frankly, misleading and totally aimed towards preserving their own self-interest and not towards protecting democracy, it just completely infuriated me. And so I sat down and I wrote a post and I sent it to The New York Times, and they ran it.

So let’s unpack it a little bit. So during the election period what are you seeing? Bring me into what you’re seeing at the time that is concerning to you. In terms of whether it’s fake news or it’s misinformation, what are you piecing together at that time?

During the election, I wasn’t deeply focused on what was happening on Facebook and Twitter. But immediately after the election, I started reading about some of the things that had happened, about how the Trump campaign, and other people who may or may not have been associated with the Trump campaign, were putting out content that was, I thought, very hateful. One of the things that I saw that happened definitely on Twitter, and possibly also on Facebook, was that there were posts that were intended to mislead people into not voting.

Facebook And Privacy

What was the realization that you’re kind of coming to about the company’s role in all of this?

One of the things that bothered me when I worked at Facebook was the extent to which they did not prioritize proactively protecting the users of the product. They built everything in a way that optimized for getting as many users as they could and selling as many ads as they could, because those are the only two things the stock market really cares about, the only things they are being measured on. And so they never thought proactively about: Well, what about this kind of bad thing that might happen? How do we build some protection for people? You know, how do we ensure that people have a consistently good experience?

It was just about: How do we get more people to use it more often so we can sell more ads? And they never really thought around the corner of: Well, if we built the system that has all these vulnerabilities, some bad people are going to take advantage of them. And so they would just wait and then something bad happened [and the] company would react. And you know, the reaction was often: How do we make this fire go away? How do we make the press stop talking about it? How do we make regulators stop talking about it? And then we’re just going to continue doing what we’re doing.

And so that bothered me quite a bit when I worked there. But the extent to which that threatened society was not clear in 2011 or 2012. It just seemed like, you know, privacy is a pretty abstract issue: privacy is important, and it’s frustrating that this company doesn’t take it seriously enough. But at that point, it wasn’t clear that these same tools could be used to undermine the foundation of our country.

…And this was an approach that you were familiar with in terms of Facebook’s approach to this bad news.

Yes, their approach to bad news, as far as I have seen, has always been try to make the bad news go away by whatever means you can and then continue doing what you’re doing.

What about taking responsibility?

I don’t think they really do take responsibility. All the internet companies effectively hide behind what’s known as Rule 230 [Section 230] of the Communications Decency Act of 1996, which prevents them from having liability for content that is posted to their sites. And that rule was designed to enable the very nascent internet industry to flourish and it did. It was very effective at that. But in 1996, no one thought that Russian spies were going to be using a social media platform to drive Americans into the streets to fight with each other and to throw an election.

I mean, that was just beyond anyone’s comprehension. And when you’re talking about some other things that happened even outside the U.S., like allegations of Facebook being used in genocide and things like that, I mean, these are far beyond the pale of what was imagined when that law was written in 1996. And so because they have no legal liability, because they can hide behind this law, they don’t take responsibility. And that has to change.

…Let’s go back to the early days when you were there. So what was your role at Facebook and when did you come on?

Yeah. So I joined in mid-2011, about a year before the company went public. I was an operations manager, and I was working on … It was still relatively early days of the company, in that they had relatively mid-level people like me doing fairly important things. I ran the ad network, which served ads inside Facebook apps. If you remember apps like Farmville, they were really big at the time, when Facebook was still mostly used on desktop computers. I was responsible for both the ads inside the apps and also privacy issues and policy issues with the developers who built the apps. So that was what I focused on for the year and a half that I worked at Facebook.

How old were you when you started working at Facebook?

I was 32. So I’d had quite a career before I worked in tech, as a musician. I worked in L.A. as a recording engineer, and then I career-switched. I went back to graduate school, and my first real tech job out of graduate school was at Facebook.

And so looking back, how qualified were you for the job that you were given?

Not very. I ended up in an interesting situation because I had been the main person who was working on privacy issues with respect to Facebook Platform, which had many, many, many privacy issues. It was a real hornet’s nest of problems, because they were giving access to all this Facebook data to developers with very few controls. And because I had been one of the only people who was really focused on this issue, we ended up in a situation a few weeks before the IPO where the press had been calling out these issues over and over again. They’d been pointing out the ways in which Facebook had not been meeting its obligations.

And I ended up in a meeting with a bunch of the most senior executives of the company. And they sort of went around the room and they basically said, “Well, you know, who’s in charge of fixing this huge problem which has been called out in the press as one of the two biggest problems for the company going into the biggest tech IPO in history?”

And we’re talking about privacy problems.

Privacy problems and privacy was one of the two biggest problems. But Facebook Platform was probably the worst vector for privacy issues. So this was a big component of the number two issue that the company had going into this IPO. And they went around the room and they said, “Well, who’s in charge?” And the answer was me because no one else really knew anything about it.

In retrospect [it] was crazy because, you know, this is nine months into my first job in tech. And you’d think that a company of the size and importance of Facebook would have really focused and had a team of people, and very senior people, working on these issues. But due to, I’d say, organizational chaos and a lack of prioritization of privacy, it ended up being me.

What did you think about that at the time?

I was horrified. I didn’t think I was qualified.

Did you say that to anybody?

I did. Yeah, I did. And the response I got back was basically, “Don’t you think this is important?” It was essentially: people above me didn’t want to be on the hook for this.

On the hook for what?

They didn’t want the internal political problems that were associated with privacy. They were not concerned about solving the problems. They were concerned about protecting their reputations inside the company.

The Facebook IPO

And why is the timing of that important in terms of being before the IPO? Why is the timing of what you’re describing important?

It’s important because it was a period in which the company was under [a] tremendous spotlight and obviously, the company is under a huge spotlight right now. But going public was a huge step and there was a sense inside the company that basically, everything had to go perfectly, that the IPO had to be a success. It’s a company with a culture that is very focused on appearances. It’s very much a company where people are very focused on managing up and looking good. And so it was very important that things look good for the IPO.

And you had other concerns at the time about the ways in which Facebook could be used.

Yeah.

Bring me into the scene. What was the scene in which you were kind of expressing or developing a list of concerns?

Yeah. I had a bunch of concerns. One of the concerns is about children. So Facebook at the time had been working on products for children. They were not released until years later but they were thinking about how to build products for children. And I had the same concern for kids that I had for adults, which is they were very focused on the number of users they could get. And they were not focused, I felt, appropriately on protecting users. And for me seeing that same approach applied to children was really scary. So that was one concern.

…Who did you bring these concerns to?

I raised a number of concerns to the operations leadership. In retrospect, there’s part of me that wishes I had stamped my feet and run around with my hair on fire more than I did. I certainly ensured that people who should have taken action knew about all these issues, and I was surprised that there didn’t appear to be much interest in taking action.

So what was the response generally to your concerns [from] the people that were higher up the chain?

It ranged. … On the privacy side, people appreciated that I was raising these concerns, but then I didn’t see much action to make significant changes. In other cases, people just didn’t want to hear it. After a very important meeting talking about a very important issue, I talked to someone who was more senior than me in my organization, and he said, “I didn’t hear what you just said. I didn’t hear that part in the beginning.” Basically, like, “I have no interest in helping you.”

And what was the specific concern you’d raised in that meeting?

The specific concern I raised in that meeting was basically I [was] a relatively unqualified person that had just been put in charge of a very, very large responsibility for fixing things that are very broken. And I am neither fully qualified nor able to fix these problems myself.

And the high…

His response was, “I didn’t hear that part from you,” which is odd, because I had confirmation by email from an even more senior person that, in fact, I was on the hook for fixing some of the problem.

Was there ever a response? It sounds like there was a major priority always put on growth. Was the response that you received ever “That’s not our priority. Our priority is growth.” I’m just kind of curious [about] the interplay between: OK, if they’re not prioritizing the concerns you had, what were they prioritizing?

The best way to understand how technology companies prioritize is where they allocate engineers. That’s how you can tell. And I struggled to get engineers even to fix broken things that I was responsible for operating, things related to compliance and protection. Meanwhile, the Growth team had a huge number of engineers. So while no one said to me, “We don’t prioritize being in compliance; we do prioritize growth,” they simply allocated the engineers to growth and not to compliance.

And what was the Growth team’s job?

The Growth team’s job was to get as many new Facebook users as they possibly could. That was their job. And so they were working on optimizing the experience such that when you use Facebook the first time it stuck and you would use it more and more and more, or they were optimizing the various funnels – the ways that people actually enter into using Facebook for the first time. I mean, their metric is very simple: It’s how many people use Facebook.
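As an illustration of the funnel optimization he describes, a growth team typically measures conversion between consecutive stages and attacks the worst drop-off. A minimal Python sketch, with hypothetical stage names and counts rather than Facebook’s actual signup flow:

```python
# Hypothetical signup funnel; stage names and counts are invented for illustration.
funnel = [
    ("visited signup page", 100_000),
    ("created account", 42_000),
    ("added first friend", 30_000),
    ("returned next day", 18_000),
]

# Measure the drop-off between consecutive stages; the worst one is the
# natural target for optimization.
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_count / count:.0%} conversion")
```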

Engagement And Facebook

It’s been described to us that there was this North Star metric at Facebook which was engagement. What does that mean?

Well, you could think about it in a few different ways. One way that the company thinks about it is daily active users, which is the number of people who use Facebook in a given day, and monthly active users, which is the number of people who use it in any given month. And one of the things the company looks at is: Of the people who use it every month, how many use it every day? So basically, as a ratio, how many people are addicted such that they use it every single day?

So you can think of engagement in one sense that way. Another way you think about it is: What are people engaging with? What are they clicking on? What are they Liking? Which features are they actually using to connect and share? And depending on what you’re trying to solve for, you would look at different metrics. So if you’re looking at how well the company is doing at growing its user base, obviously, you look at daily active users. If you’re looking at what kind of content is most effective for an advertiser, you’d probably want to see how much that content gets shared, how many people click on links, how many people Like it, etc.
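The addiction ratio he describes is just daily active users divided by monthly active users. A minimal sketch, assuming a toy activity log (the users and dates are invented):

```python
from datetime import date

# Hypothetical activity log: user -> set of dates on which that user was active.
activity = {
    "alice": {date(2018, 2, d) for d in range(1, 29)},  # active every day
    "bob": {date(2018, 2, 3), date(2018, 2, 17)},       # active twice
    "carol": {date(2018, 2, 10)},                       # active once
}

month = [date(2018, 2, d) for d in range(1, 29)]

# MAU: anyone active at least once in the month.
mau = sum(1 for days in activity.values() if days)

# Average DAU across the month.
dau = sum(
    sum(1 for days in activity.values() if day in days) for day in month
) / len(month)

# The "stickiness" ratio: what fraction of monthly users show up on a typical day.
print(f"DAU/MAU = {dau / mau:.2f}")
```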

And was quality of content a part of the calculus of the company?

Yes, we definitely talked a lot about quality on my team with respect to apps. But it was not prioritized in the same way as user growth. That was the key metric.

What is the News Feed, and why is it so powerful?

Yeah. So the goal of the News Feed is to provide you, the user, with the content on Facebook that you most want to see. It is no longer just chronological. It is optimized for the things that you are most likely to want to engage with. And the way it is built is, it is designed to show you some mix of content from your friends, from pages that you have Liked, and from advertisers.

And so the algorithm, the set of instructions that basically tells the News Feed what order to put those cards in, is optimizing for a number of things, including the likelihood that you will actually engage with something and also how much money they can make by showing you ads. And one of the key things to keep in mind about News Feed is that it is designed to make you want to scroll down and continue to engage. Facebook’s goal is not to get you to come to Facebook, look for 30 seconds, get your information and leave. Their goal is to get you to stay on the site as long as you possibly can. And so the goal of News Feed is to do that, to suck you in and to get you [to] keep scrolling, keep looking, Like some posts, click on some links and see some ads.
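The real ranking model is proprietary and, as Parakilas notes later, understood end-to-end by very few people. But the shape of the optimization he describes can be sketched: score each candidate card by a blend of predicted engagement and expected ad revenue, then sort. Everything below (the Card fields, the weight, the numbers) is an invented toy, not Facebook’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Card:
    source: str            # "friend", "page", or "ad"
    p_engage: float        # predicted probability the user clicks/Likes/comments
    ad_value: float = 0.0  # expected revenue if this card is an ad, else 0

# Toy scoring function: a weighted blend of engagement and revenue.
# The weight is invented; the real model balances far more signals.
def score(card: Card, revenue_weight: float = 0.5) -> float:
    return card.p_engage + revenue_weight * card.ad_value

candidates = [
    Card("friend", p_engage=0.30),
    Card("page", p_engage=0.12),
    Card("ad", p_engage=0.10, ad_value=0.40),
    Card("friend", p_engage=0.05),
]

# Order the feed to maximize the blended objective.
feed = sorted(candidates, key=score, reverse=True)
for card in feed:
    print(card.source, round(score(card), 2))
```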

So it’s not optimizing for quality necessarily. It’s optimizing…

It’s optimizing for time spent. At the end of the day, that is how they are going to make more money in their current business model.

Do we have any sense of how the computer algorithm determines what it is that you want to see?

It is incredibly complicated. The systems behind these feeds, whether it’s Facebook or Twitter or YouTube’s recommendation engine, there are only a handful of people who really know how the systems work. So while we can guess at and we understand some aspects of it, the entirety of the system is not easily knowable.

The Downside Of The Algorithm

And does that concern you?

Oh, yeah. One of the things I’ve been thinking recently is we have this idea in science fiction that at some point the robots are going to come and they’re going to enslave humanity. And I don’t think that’s actually in the future. I mean, if you think about Mark Zuckerberg and [Twitter co-founder] Jack Dorsey, if you were to parachute a new person into the role that either of those people have and you said, “OK, you’ve got this product that you’re in charge of and it is tearing apart the fabric of society in a variety of ways and it’s literally being used to send people into the streets to fight with each other. What do you do?” The rational person would say, “Shut it down. Let’s turn [it] off and let’s figure it out.”

And yet they’re not doing that. In fact, they’re barely making any changes at all, because they are under the control of the artificial intelligence that is News Feed. And that’s kind of an extreme way of describing this. But when [Tesla co-founder] Elon Musk says that he’s deeply worried about the existential risk that AI poses, that’s the kind of thing he’s talking about.

Help me understand that. I mean, why is it that they’re in some way slaves to the algorithm? What does that mean exactly?

Well, so they are slaves, but [what] they really are slaves to is the business model. And the business model is built on the idea that people use this product over and over and over again for a very long period of time. And the way that they get people to do that is they’ve built addictive News Feeds that suck you in and make you want to keep scrolling, keep looking, keep Liking, keep clicking. And so these systems have been built in such a way that they’re hard to control and optimize.

I would argue that we humans are now out of control. We’ve built a system that we don’t fully understand. And that system is addicting humans, so it is effectively controlling us. And because these companies are built around a business model where they want to capture as much of your time as possible, it is effectively impossible for [them] to just say, “No, no, no, we’re just going to turn that off and do something totally different,” because then they would lose so much money that the companies would potentially collapse.

…And in terms of personal experience, did you ever run up against that? I mean, in terms of, you know, that issue of de-prioritizing risk?

Yes. I mean, one of the things that I saw over and over again is that they would allocate engineers to work on growing the company. The Growth team had tons of engineers, and people were constantly figuring out how you could make the new user experience more engaging, how you could get more people to sign up. They had all of these programs and products that were designed to get more users, get more usage.

But when I was trying to get people to fix parts of the compliance systems that were broken, it was very difficult for me to get engineers to do that, because everyone was focused on growth, growth, growth.

Facebook’s Reaction To Warnings

…Let’s put this in a broader context. Why were you even coming up with these kinds of concerns?

Yeah. So in working on all these issues around Facebook games, I became more and more concerned about the broader data infrastructure of Facebook and the amount of data that Facebook had about its users and the vulnerabilities that the system had – the various ways that data could be extracted, the ways that data could be used against people. And so I started thinking through what the worst-case scenarios were of what people could do with this data.

And one of the worst-case scenarios I thought of was: What if the Syrian government – this is back during the Arab Spring – got access to Facebook data, or somehow convinced people in Syria who were activists to engage with pages or use Facebook games or apps in a way that enabled [the government] to understand who was opposed to them and who wasn’t, and then targeted the people who were opposed to them?

And it was a fairly involved scenario that I made up. I had no evidence this was happening, but I used it as a way of thinking through: How could this system be used for bad? And so I drew up a very, very detailed map of the data vulnerabilities of Facebook Platform specifically, but it sort of showed the data vulnerabilities of Facebook in general, because [the] platform allowed access to a great amount of Facebook’s data.

And I showed all the areas where we had some sort of protective measures in place. Not all the protection measures were sufficient, but they were something. I showed areas that were fully exposed, and I showed some of the kinds of bad actors that might try to attack through these various vectors. And I put this into a document, which ended up being a PowerPoint deck, and I shared that with a number of people, both people in privacy and some senior executives.

And the response was muted, I would say. The privacy folks were interested and supportive. I did not get much, if any, follow-up from the executives. I got the sense that this just wasn’t their priority. They weren’t that concerned about the vulnerabilities that the company was creating. They were concerned about revenue growth and user growth.

And that was expressed to you or that’s something that you just gleaned from the interactions?

From the lack of a response.

And how senior were the senior executives?

Very. Like among the top five executives in the company.

So essentially, you were raising concerns about the vulnerability of Facebook to be used for bad purposes as opposed to good.

Yes.

Were there other people like you that were raising these types of concerns that early?

I’m not aware of other people who were raising concerns, who were putting together documents outlining these kinds of vulnerabilities. I know that other people at the company shared these concerns, but I’m not aware of anyone who put these kinds of thoughts into an organized fashion and shared them with executives and said, “Hey, we have a problem.” I know that a number of people have raised concerns, but I think at this point it should be relatively obvious to the whole world that there are problems.

When you say the data vulnerabilities, I’m just trying to think in the most layman terms possible.

Yeah. So the document that I created outlining some of these vulnerabilities really showed in many different ways that different kinds of bad actors could attack Facebook users through different means. So for example, as a malicious person, you could go and look at every single person’s public Facebook page and you could gather information about them. That’s relatively unsophisticated.

You could set up some kind of a malicious game that appears to be one thing but, in fact, is only designed to harvest information from certain kinds of people. Then you could use that information against them. You could get their email address, and at that point you could get their list [of] friends, their photos, even their messages. So you could actually get a tremendous amount of information about people.
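The fan-out he describes – one consenting user exposing data about friends who never consented – is worth making concrete. A schematic Python sketch, with invented users and fields rather than the real Facebook Platform API:

```python
# Schematic illustration only: hypothetical data, not real Facebook Platform calls.
users = {
    "u1": {"email": "u1@example.com", "friends": ["u2", "u3"], "photos": ["p1"]},
    "u2": {"email": "u2@example.com", "friends": ["u1"], "photos": ["p2", "p3"]},
    "u3": {"email": "u3@example.com", "friends": ["u1"], "photos": []},
}

def harvest(installer: str) -> dict:
    """What a malicious app could collect when a single user grants it access."""
    haul = {installer: users[installer]}
    # Under the platform permissions of that era, apps could also read
    # data belonging to the installer's friends.
    for friend in users[installer]["friends"]:
        haul[friend] = users[friend]
    return haul

# One install exposes three users' data.
print(len(harvest("u1")))  # 3
```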

And if you’re malicious you could do some pretty bad things with that information. So the purpose of the document was: Let’s think through all the different ways that some of these kinds of bad actors, whether they are foreign governments or whether they’re data brokers who are just trying to sell people’s data, can get this information and what they might do with it, and how we, the company, can protect ourselves.

That was really helpful. Why did you leave Facebook?

I didn’t agree with them about some of these issues and I became increasingly upset about some of the issues. And so I went to work for a startup that was doing social advertising. So I actually stayed in the same space. But I just didn’t agree with the values of the company.

Did you make it known when you left that you left because you didn’t agree with the company’s values?

No, I didn’t. And in retrospect, there’s part of me that wishes that I had, that I had fought more. But I don’t know that that would have done anything.

The Arab Spring

The Arab Spring. You were there during the Arab Spring?

The end of it. Yeah.

I mean, internally, inside the company. We all, externally, saw the Arab Spring as this incredibly hopeful moment. But how did the Arab Spring, and the idea of social media revolution, resonate inside of Facebook?

Well, they bragged about it. I mean, they were relatively restrained externally about taking credit for it, but internally they were, I would say, very happy to take credit for the positive aspects of the Arab Spring and the idea that social media was being used to effect democratic change.

And there wasn’t a lot of thought given to the dark side: [that] the same kinds of powers that had enabled democracy activists could also enable foreign spies and various kinds of malicious activity.

So what was your feeling?

Well, that was right when I joined. So at that point I thought it was great. I was like, yeah, I’m part of this company that is helping change the world for good. And I felt incredibly excited about that. And then as I learned more and more and more, I became more and more concerned.

Facebook And Filter Bubbles

Around the same time that you joined, Eli Pariser wrote “The Filter Bubble.” Right? Were you aware at the time of the concerns about the filter bubble?

Not deeply. I remember reading his post at one point, but I think that didn’t really hit home for me until the 2016 election. And I didn’t really comprehend the extent of the damage that’s done by creating a personalized view of the world. At the time, it seemed like the shiny new vision for the future. You’d get whatever you wanted. You, the customer, would get all of the kinds of things you wanted. Behind the scenes, there’d be all this work being done to understand you and provide you with everything that you wanted, and it was this sort of magical, personalized experience. No one I talked to at the company at the time saw that this was a risk.

The risk being what?

The risk being that if you provide people with the personalized experience, they can get a view of the world that can be quite different from reality and very different from the person standing next to them. And that can fracture our society.

Outside critiques of Facebook, like “The Filter Bubble” book in 2011 – did those things land or permeate inside the company? I mean, the idea that there were people studying, or trying to study, what it was that you guys were creating in there – did people take the outside critique into account?

People at the company definitely read things that were being written about the company very voraciously. But there was also a sense that this is the company that is changing the world. You know, we are the creators of this brave new world, and while we will read what other people have to write, we are the ones making the decisions. And I think there is a certain arrogance there that led to a lot of bad long-term decision-making.

I think the short-term decision-making was very well executed. But the long-term ramifications of those decisions [were] not well thought through at all. And it’s got us to where we are right now.

The Facebook Culture

There was an emphasis also on the youth culture inside Facebook. You actually were probably a little bit older than some of the people that were working there. But was there a sense that people had a sense of history [of] this company, this ethos of “Move Fast and Break Things” and disruption? Give me a sense of what that culture was like on the inside.

Yeah. There was very little sense of history. I would argue there’s really no sense of history. It was more of a sense that we are building the future, and there was a real focus on youth being a good thing. I remember one of the people I managed was very young, and my manager was very excited about how young this person was. This person had a lot of talents, but I thought it was strange there was such an emphasis on the importance of young, smart people instead of just the importance of smart people.

It was not a particularly diverse workforce. It was very much the sort of Harvard, Stanford, Ivy League group of people who were largely in their 20s, and then some older executives. And I think that kind of myopia – the same kinds of people telling themselves that they are the masters of the universe and designing the future – leads to some bad long-term decision-making, because people don’t have the history.

They haven’t had the experience to say: this thing that happened before went well, and this other thing that happened before went badly. Let’s study both and understand how what we’re doing right now might go badly. There was really no sense of that. No one understood that things might go badly.

Ads In The News Feed

At one point Facebook starts to insert ads into News Feed.

Yeah.

Can you tell me about that and tell me why that’s significant?

So what happened right after the IPO is the company was put under a tremendous amount of pressure from the financial press about its vision for mobile and the fact that the company had been built as basically a desktop web product. But in fact, users were rapidly moving away from desktop and to mobile devices. And the company at the time [of] the IPO had a relatively weak mobile strategy and so they had to pivot very quickly.

And one of the things that they had to do was figure out how you got ads onto a mobile phone. And it’s a totally different form factor. On a desktop computer, you have all this extra space. You had these right-hand-side ads that could fill up a certain part of the page, and on a mobile device that part of the page did not exist. So they had to do something. And so the decision that was made – it was a very smart decision – was to figure out how to insert ads into the News Feed that looked and functioned very similarly to the posts that you would see from your friends.

And that did two things. One is it supercharged Facebook’s revenue, because suddenly they could access all this inventory on mobile. And the second thing is it made those ads incredibly compelling, because you couldn’t ignore the ads the way you did when [they] were on the right-hand side. They were right there in the middle of the feed. And the thing that you came to the site to look at was content like your friends’ baby photos. And then you just scroll past that and there’s something in the same form that is, in fact, an ad. And so that really enabled advertisers to have that much more impact.

…What were your concerns about privacy?

My concerns at that time were that I knew that there were all these malicious actors who would do a wide range of bad things given the opportunity, given the ability to target people based on this information that Facebook had. When I worked at the company I had a certain set of ideas of what those bad things might be. And having seen what’s happened in this country in the last couple of years I now have an entirely new and much bigger set of concerns about what those bad things might be. …

But bring it back to privacy, because I think that that’s one of the links that’s so hard to draw on, was always hard to draw. I mean, were your early concerns about privacy also concerns about making yourself vulnerable to manipulation?

Yes.

So explain that to me.

Yes. So what concerned me when I worked there was that there were insufficient controls around the data that Facebook had, so that people could use it in malicious ways; and also that, given enough of this data, you can really push people to do things, that you can manipulate them in ways that you would not be able to do without that data. So there have been various allegations of micro-targeting during the last presidential election, where very small groups of people, in some cases targeted potentially by race or other categories, were sent messages that were specifically designed to discourage them from voting. And that kind of targeting of very, very small groups of people with very specific messaging is totally new. That could not have happened in the era of TV-based presidential advertising. You could never have taken what is essentially a “hey, don’t vote” kind of message, put [it] on national TV, and gotten away with it. Right? So that is totally new and very, very dangerous.

And if you look at the future, if you think about the kinds of technologies that are coming, there’s something called “deepfakes,” which is the ability to generate or regenerate video or audio that looks like you, that sounds like you, but that is completely made up. When you take that kind of capability and you add to that the ability to target very specific people, you can do some really damaging things. And that really scares me.

Silicon Valley Culture

I want to just get back to one thing, which is kind of the culture of secrecy in Silicon Valley. And also one thing that you wrote, I think it was [in] your Medium piece, where you basically described your feelings as you were watching the election. And I think the words that you used were that it was painful and personal. Right? Those were the two words that you used. I mean, before we get to the secrecy thing, can you just describe for me, having been a part of building a company early on, what it was like for you emotionally to kind of watch this tool become what it had become?

It was very, very painful. It was very painful seeing some of the content that was put out on Facebook and on Twitter and other internet services that was just hateful, racist, xenophobic. When I worked there, I felt that I was building something that was helping the world get to a more democratic place. The company talked about the Arab Spring and was proud of it and of all of the good democratic outcomes that came out of social media. And that’s something that I was proud to have been a part of. And then to see those very same tools being used to send messages that were so angry and divisive, and that were counter to all the ideals that I thought those companies stood for and that I stood for, was extremely personal.

Is that a fixable problem?

I think it’s fixable over time, but it’s only fixable if you reimagine the business model of the companies, if you reimagine the architecture, everything. You can’t put a Band-Aid over that and say, “Oh, well, we’re just going to find all the conspiracy theories and reduce their distribution.” I mean, that would be very hard. And then there would be some other issue that you wouldn’t cover.
