The Facebook Dilemma | Interview Of Dipayan Ghosh: Former Facebook Privacy And Public Policy Advisor


Dipayan Ghosh was a privacy and public policy advisor at Facebook from 2015 to 2017. He is now a fellow at the Harvard Kennedy School.

This is the transcript of an interview with FRONTLINE’s James Jacoby conducted on July 19, 2018. It has been edited in parts for clarity and length.

Snowden had just happened, right, and if we situate ourselves back then, what was it that you were doing, and what was it that you were called up to do at that point in time?

Well, at the time that the Snowden disclosures happened, I happened to be doing my postdoc, and I was studying information theory, the study of how information gets sent from point A to point B in a secure and private manner. At the time of the disclosures, there was this tremendous attention in D.C. and on the Obama administration as to what needs to be done and the way forward to make sure that this kind of thing never happens again; that is, the tremendous breach of privacy and trust. It was a great opportunity to try to help the government try to work its way forward from everything that had happened and develop public policy in a place to try to treat this major issue of privacy.

Who gave you the call, and who was running the whole thing, and what was really the mission statement of what you ended up doing?

Well, it’s hard for me to go into who made the call and stuff like that.

OK. What had the White House set up at that point in time that then you were a part of?

Well, President Obama had determined that we needed to do something about this whole set of disclosures; that is, to develop public policy that can respond to them. So he set up a comprehensive review, which was led by John Podesta, who eventually led the Hillary campaign, to try to understand the big data ecosystem and the ways that it was changing society, and the negative and positive impacts of the industry’s activity in that area. And it was our job as part of that review team to try to assess all of that and develop public policy and the way forward to treat the problem of individual privacy.

And at the time, what privacy laws existed in the United States of America?

Well, privacy in the U.S. traditionally has been sectorally regulated. There are a bunch of different laws that treat different sectors of specific sensitive data, like health data or educational data or kids’ data. But there’s no law on the books, a baseline privacy law, that really treats any kind of data. Let’s say the American voters’ data, or just the American consumers’ data. There’s no law on the books. In fact, there’s no right to consumer privacy.

That was one of the things that we were trying to treat as part of this review process; that is, to develop a way forward for a comprehensive baseline privacy law at the federal level.

The Obama administration, had they already – what had Obama already done or said or promised when it came to baseline privacy at that point?

Well, in 2012, the administration released a major report called the Consumer Privacy Bill of Rights. The following year, the Snowden disclosures happened, and in the immediate aftermath, the administration, including the president, determined that we needed to operationalize the recommendations of the 2012 Consumer Privacy Bill of Rights report into a federal law. That led to the development of the Consumer Privacy Bill of Rights Act, which the administration released in 2015 and sent to Congress. That of course was dead on arrival in Congress.

We’ll get to that. But in terms of when you come into the administration, how would you describe the appetite to do something about either regulating the collection of data? What was the appetite at the time?

Well, I think, when I came in, the Snowden disclosures had just happened, and the administration was very up on the idea of developing a baseline privacy law that could be worked on with Congress to try to rein in some of the injustices we’d seen in the big data space.

Snowden’s one thing, and that’s about government surveillance. But what existed at the time? …What about the appetite to go after the industry and its collection of data? Set me in time about what’s really happening in that, as you put it, that ecosystem and how important data was becoming at that point.

Well, that ecosystem consists of most significantly a few behemoths in Silicon Valley, and I think it would be fair to say that the regulatory regime that operates over that industry’s set of practices even today is pretty slim. It doesn’t have much of a stick that it can raise. The only set of regulations that they really have to comply with in regard to their main moneymaker, which is digital advertising, is the Federal Trade Commission’s Section 5 authority, which is really not a good way of policing the system in a way that can protect the consumer in a proactive way.

You could say that it was kind of a regulatory Wild West. There’s a lot of public scrutiny now and a lot of attention on this industry and a lot of impetus to try to regulate it from certain parts of Congress and the rest of the American population. But even now I think it’s going to be very challenging to see movement to pass a privacy law.

Again, back in the past. …We’ll get to now in terms of broad-strokes stuff, but let’s stay back in the past for a second. This was a new major economic engine for the United States, right, at the time. Silicon Valley, the power of data. Help me understand again what that was like at the time, especially as you’re coming into the White House and trying to come up with ideas to put some guidelines on them.

Big data was in, and people had been talking about the power, the economic strength, offered by big data for several years. In fact, by the time that I got to the White House, big data was such an important piece of the American economy and the global economy that several companies were really built on the practice of collecting as much data as possible and operationalizing it, whether through digital advertising or insurance or whatever it might be, to inform their business decisions. Big data was in.

And why was that of concern?

Well, I don’t think that the practice of big data – that is, collecting a lot of data and processing it for developing inferences – necessarily was a bad thing. Of course, if companies do it the wrong way, it can lead to mistrust, and it can lead to potential security breaches. But the simple idea that a consumer’s privacy has been breached wasn’t really something that anybody was thinking about at that time, and that’s why it was so hard to push privacy legislation forward.

I think this industry grew up in a space that was not very regulated. Companies saw a tremendous opportunity to develop inferences using data and through processing it and collecting it, and didn’t really think about what kinds of harms could really come to people from all of those practices.

On the task force that you were on, were you thinking through those harms, or were you trying to – what was the mission of what you were trying to do?

Our mission was to develop public policy thinking around commercial privacy; that is, to really develop a way forward for the American people that can treat this problem of big data breaching individual privacy and resolve it once and for all, potentially through federal legislation.

Facebook And Privacy

And was Facebook on the radar in terms of one of these companies?

Absolutely. I think Facebook is central to this small set of Silicon Valley behemoths that deserves some scrutiny for the way that it uses data and the implications that it has for public policy and individual privacy and security.

How much was known by you, and how transparent was Facebook with the government at the time about its data collection practices?

Well, I think that Facebook was no different from any other player in the industry. In fact, Facebook came to the table just in the same way that many other companies came to the table; that is, to describe its practices and to really shed some light on what it does and how its business model works. So I’d say that it was similar to the other set of companies that we were talking to at the time.

What was it that they said? What were these companies telling you about their practices at the time?

I think that the whole industry, including Facebook, saw data as really integral to its core business practice. Part of the business model for all of these companies is to collect as much data as possible on the individual and develop inferences about the individual from that data so that it can target ads, essentially, and curate content in the News Feed or in the Twitter feed or in the YouTube recommendations or whatever it might be. All of those inferences and all of those algorithms run off of behavioral data.

What was the concern, if any, at the time in the White House about their collection of that data and whatever inferences they might make about us?

Well, I think our concern was really that this whole set of practices is completely in contradiction to individual privacy, and if we care about individual privacy to the extent that we care about the individual’s autonomy and protection from the industry and protection from the government and everything that comes with that, then we really need to do something about this set of practices and assure that the industry takes the right steps in protecting that data and collecting that data and managing it.

Were the big tech companies like Facebook, do you think, forthcoming with you about what their practices were at the time?

Yeah, I think they were as forthcoming as possible. Our interactions with any particular company were not that significant. We spoke with a lot of different companies from many different kinds of industries, so the touch points with any particular company, whether Google or Facebook or Bank of America or whomever, were not that many.

What was the stance of the industry at the time in terms of the White House wanting to make some sort of baseline privacy?

It depends on which industry we’re talking about, but if we are talking about the major Internet companies, I think it’s fair to say that the major Internet companies, Google, Facebook, Twitter, run off of data. That is integral to their business model, and any regulation of the collection or use of data can directly threaten that business model. So I think their reaction to this regulatory interest was very reticent.

How did that manifest itself? How did you see that yourself in terms of their reticence or their opposition to any effort to regulate data collection?

I didn’t see it explicitly in any way.

Well, how do you know of the reticence, then?

Private conversations, mostly. Yeah, it’s hard for me to go into it.

Regulating Big Data

…Let me ask this. Were the big Internet companies mobilizing in any way to fight against any effort to regulate them and to regulate data collection?

Well, I think that in this particular case of the Obama administration pushing forward privacy regulation or legislation, calling for it, the industry didn’t really need to mobilize in any way, because upon the release of this legislation, it was pretty clear that it was not going to move forward in Congress in any way. So I don’t think they had to mobilize in the way that they had to mobilize for immigration or SOPA/PIPA [Stop Online Piracy Act/Protect IP Act] or anything like that.

Why wasn’t it going to move forward in Congress?

A whole variety of reasons. Just for illustration, when President Obama announced the legislation and sent it over to the Hill, it ultimately got no sponsorship from anyone in Congress, and I think that that was a combination of all sorts of reasons: industry lobbying; industry disagreement among different industries. Even the activists thought that the legislation could have been improved in different ways. So basically stakeholders from across the ecosystem were pushing against the passage of that legislation.

How hard did the Obama administration fight for it to actually succeed?

Well, this was announced by the president at the Stanford Cybersecurity Summit in 2015 with lots of style and élan. It was the job of one of the president’s closest advisers, John Podesta, to see that the White House could make progress in this area. We had invested a lot of resources and time into pushing it forward, so I think there was every commitment possible from the administration itself, which included officials from the Commerce Department and an interagency group that was contributing ideas to all of this. In terms of the political will that the president put behind this, I think it was tremendous, in terms of his rhetoric and in terms of the time that he took to understand the issue. From my perspective, the actions and the thinking of all my colleagues were well intentioned and intellectually honest. I think it was just a difficult time to push privacy legislation, for all sorts of reasons.

Did the industry in part kill it?

I don’t know. I don’t know what the thinking behind a member’s decision to co-sponsor it or to introduce it is. I imagine that it had something to do with industry lobbying, industry disagreements, and other interested lobbying, including from public advocates.

Facebook As A Surveillance Company

Was there a rift between the White House and Silicon Valley after Snowden?

Absolutely. I think you could say that. The Snowden disclosures set a tremendous barrier between the government and Silicon Valley. Silicon Valley did not appreciate the associations that the press was drawing between its practices, its data collection, its, you could say, surveillance operation, and those of the administration and of the executive branch of the government. I think that the industry tried to distance itself from the government at that moment, because this whole set of implications that Edward Snowden was drawing, in fact, was a threat to these companies, especially overseas.

You brought up the word “surveillance.” Were companies like Facebook and others that depended on data collection, was it seen within the White House or your team as surveillance to some degree?

Well, let’s define “surveillance” for a second. Surveillance in my mind is the unchecked and uninhibited collection of data and the drawing of inferences from that data for the purposes of understanding the individual. That’s core to the business model of Facebook, of Google, of any Internet company that’s at the center of this industry. I think you could say that part and parcel of their business model is the practice of surveillance.

Was that the main concern of the White House at the time?

I think that the Obama administration was particularly concerned about all of these practices that lead to corporate surveillance, that lead to the idea of a single company maintaining a tremendously powerful behavioral profile on the individual, which can threaten that individual’s autonomy from the industry. That was one of the things that we were particularly concerned about, and what we wanted to try to do with all these public policy efforts was to try to maintain a commitment to the individual’s autonomy, to his or her agency, in the face of the government, or in the face of the industry, which does have every incentive to collect as much data as possible on the individual.

What skill set or knowledge base or background did you bring to the equation there? Why were you brought on here?

Well, my area of study in graduate school and during my postdoc was computer science and electrical engineering, and what I was particularly interested in within my area of research is information theory, the practice of sending information from point A to point B in a secure and private manner. …What I tried to do was bring some technical expertise to the table as we were dealing with the industry, with other parts of different industries, or with our colleagues across the government. I was surrounded by brilliant people who were almost exclusively policy experts or lawyers and legal scholars, and it was my job to bring a slightly different perspective to the table.

Do you think that the American public at that point in time had a good sense as to what these Internet companies or surveillance companies were basically up to?

Absolutely not. I don’t think that the average individual consumer has even now really any idea how Silicon Valley and the major Internet companies work.

And the danger of that is what?

The danger is that the business practices of this industry can threaten the individual: their power to act, their power to decide, their power to vote, their pricing in the market. In the case that these businesses threaten all of those things, and even, you could say, take away some aspect of the rights of the individual, and the individual doesn’t know any of that, is simply not aware that these business practices are threatening him or her, we have a serious problem, because what that means is that we don’t have a way of building public sentiment to drive a public policy that can treat that problem.

Were there any worst-case scenarios at the time that were being discussed inside the White House about ways in which this data collection or relinquishment of privacy could in any way play out?

That was I think what we were lacking. We were lacking a way of telling this story to the American people that can really drive public sentiment and cause legislation to move forward. It’s a shame that public policy that can protect people, that is generally positive, still needs the buildup of public sentiment to push it forward, to create the political will for members in Congress to vote for it. But that’s how our political system works.

Right. But at the time, maybe there was a problem with the way the White House communicated this. But what were you worried about? You’re someone who studies this stuff, who was thinking deeply about it. Were there worst-case scenarios that were discussed or thought about inside the White House, about ways in which this data collection and the end of privacy to some degree could play out?

I think what I was most worried about is the fact that this whole industry operated in a regulatory Wild West. There even today is not really a way for the American government to rein the industry in. When we think about other industries that have caused tremendous harm to consumers in the past when they weren’t really well regulated, whether that’s the telecommunications sector or cigarette companies or alcohol companies or whatever it might be, we have seen over and over again that the American people and the government have decided to rein them in in some way. And even though I can’t put my finger on exactly how those harms might be manifested because of this regulatory Wild West that the industry was subject to, I think we all knew that it was only a matter of time that harm would come to the American people, whether that’s because of political problems or foreign interference or whatever it might be.

Why did you know that? Why was it just a matter of time?

I think it’s an economic theory that an industry that doesn’t really have a regulatory regime that applies to it in a meaningful way will start to engage in practices that are harmful to the public.

The Consumer Privacy Bill Of Rights

…All right, if you could tell me the story of the – I think it was on a Friday that the bill was introduced. Are you able to bring me through that, what that final stretch was?

Sure, yeah. Well, the president had announced that he had this interest in releasing a legislative proposal addressing baseline privacy a few weeks before it was eventually released. But I think during the week that we eventually released it, we could read the tea leaves and see that the public was not going to be very interested in this thing and that Congress was not going to respond to it very well and that it would be attacked by stakeholders across the ecosystem. That’s ultimately what led to its being released late on a Friday.

As intellectually honest and positive as the provisions of that bill may have been, it was a shame that we couldn’t muster the political sentiment among the American people and Congress to really help us drive it through.

Was Facebook onboard? Were they in favor of this piece of legislation?

I think Facebook sided with the rest of the Internet companies and was very reticent.

What’s the Internet Association?

The Internet Association is a trade association that represents the biggest companies in Silicon Valley.

How powerful are they?

The Internet Association – it’s hard for me to talk about this.

OK. Just how powerful they are?

Yeah. I think what it does is suggest that I know something special about them because of my work at Facebook.

What about if you situate yourself back in the White House? How powerful was the Internet Association?

I didn’t work with them while I was at the White House except on net neutrality. I didn’t work with them on privacy.

Did you know of what their efforts were, if any, to influence Congress on privacy…?

See, the thing that I haven’t mentioned, just because it hasn’t come up, is that I don’t think that it was industry lobbying that killed the Consumer Privacy Bill of Rights; I think it was a lack of public sentiment, a lack of public awareness. It was simpler. It was just a lot simpler. And yes, there were a bunch of advocates that weighed in against it. And I think over a longer period of time the industry has always pushed against privacy legislation, which has meant that it’s always been hard for efforts like that to kind of lift off. But I think in that particular case, it was really just a lack of getting everybody in line.

The Danger Of Information Asymmetry

…At one point you kept using the term “practices that can threaten the individual’s autonomy.” I don’t really know what you mean. Could you explain that?

In the course of engaging in corporate surveillance, the biggest companies in Silicon Valley are particularly interested in knowing as much about us as possible so that they can leverage that in, for example, the targeted advertising ecosystem, which is, for the most part, how the biggest companies make money. The reason why this can really lead to a problem for consumers is because of the information asymmetry that it causes. When one company or one entity knows everything about the individual, and the individual knows nothing about the company and its practices and how it’s using his or her data, that type of information asymmetry can be very harmful to the way that a democracy works.

In what way, though? Draw that out. How can that be harmful to a consumer or to a citizen if Facebook, for instance, knows so much about us?

When we’re interacting with an Internet company and they’re collecting tons and tons of information about us without our awareness or without our understanding, and there’s no regulatory regime set up by the American government to protect that data and to regulate the industry on how it uses that data, we run into situations like what we’ve seen with Cambridge Analytica or like what we’ve seen with the Chinese data breaches or the Russian foreign interference in our elections. These are all examples of ways that the American citizens’ data has been used in an unauthorized way by a nefarious actor to try to take advantage of the country and, by corollary, the individual.

Back in that time frame of – this is pre – all of this stuff. But I’d imagine you were aware of the literature and aware of what the thinking in Silicon Valley was about these things. What were the potential harms or threats at that point in time that people were actually thinking about? Or was it just too difficult to imagine it, and that’s why you lost?

I think that’s what it is. I think that’s what it is.

Yeah, so explain that to me.

Well, I think that privacy is a really difficult concept for the individual consumer to really get their head around. To keep things general for a second, everybody has a different understanding of what privacy is. Some people, especially many kids these days, use all kinds of apps and share information very freely and seamlessly across all sorts of platforms. Others are tremendously reticent to partake in the digital economy.

I think what that signals is that different people value their information and their privacy of their information very differently. And when we have that kind of a range of feelings about privacy, and underlying all of that we know that the breach of information is something theoretical, something far off, something that we don’t think about when we’re clicking through a consent form, all of that means that people don’t really think about privacy in real time. They only think about it in retrospect.

But at the time, were there any – It’s hard to imagine that you have some of the brightest minds put to the task of figuring out, OK, we’ve got this big data industry; what threats does it pose if we’re giving up our private data? How could that be used against us? Was there any thinking about what sort of threats there might have been at the time about how people’s data could be used against them?

I don’t think so. I don’t think so. For example – I’ve been asked this before: Did the thinking that Russia could access the information and use it for disinformation operations, did that ever come up? Nothing like that ever came up.

I think what you’re saying is manipulation; that consumers could be manipulated in various ways, but they couldn’t even know that. Is that the autonomy thing that you’re talking about?

…When consumers think about privacy, they think about the protection of their data. They think in very simple terms. But what they don’t think about is everything else that can happen with their data that they just can’t see, whether that’s corporate use of their data, whether that’s the creation of filter bubbles, whether that’s the way that targeted advertising works by sectioning us off into all sorts of different parts of the American population. I think that all of that is really what public policymakers are trying to treat when they think about privacy; that is, to protect the individual from all those sorts of things that can happen with their data that they don’t see as opposed to just protect their data from security breach.

I think one of the greatest harms at play here is that this whole industry, whether we’re talking about YouTube or Google Search or Facebook and the News Feed, this whole industry operates on the collection of data on the individual to create behavioral tracking profiles on every single individual, and then to use that data to do two things with algorithms: route ads in an intelligent manner; and curate content with as much precision as possible.

What that leads to are economic opportunities for bad actors to push disinformation at people. It also leads to the filter bubble problem, where individuals are sectioned off like barn animals into different groups, which advertisers and other nefarious actors can leverage to try to drive disinformation or fake news or hate speech and create these artificial partitions in our society that become very real when we go to the voting booth or when we go to Charlottesville.

With the loss of privacy, does it not leave you more open and vulnerable to manipulation? And was that at least a part of the conversation back in 2015 or so, 2014, that if you knew about algorithms, you knew about the whole inference system? Well, if these companies are able to make inferences about us, isn’t that all about manipulating us with ads? Was that a part of the thinking at the time?

I think so. I think, at least from my perspective, what I was trying to understand is the business model that is at play here in the sector. The business model is very simple. It’s three-pronged. It is to create tremendously compelling services like YouTube or the Messenger app or whatever it might be, services that over time become very sticky and addictive, some people say; second, to collect as much data on the individual as possible through those services, to develop behavioral tracking profiles that lead to the third piece, which is to inform the curation of content and the targeting of ads at the individual. That’s the business model that we’re talking about.

I think there’s nothing necessarily wrong with that business model if it’s policed in the right way, if it’s managed in a responsible way. The real problem, which has come into high resolution after the Cambridge Analytica privacy breach and other incidents that we’ve learned about, is that bad actors infiltrate the system, and they take advantage of exactly the same tools that legitimate advertisers like Chanel or the NBA might use to leverage the filter bubble and drive manipulative advertising and content at specific filter bubbles. That’s the problem that has set a fire to American democracy, I think.

…Again, back in time, around 2015 or so, did a company like Facebook that has engineers like yourself designing algorithms that are based on data collection about individuals, do you think they were thinking through issues of the potential for people to be manipulated on their platform by advertisers or even by others?

…I think through conversations with Facebook and Google and other like companies, what we learned is that they do think about some of the things that are coming up here, particularly privacy, maybe not all the implications that come out of the breach of privacy, but certainly thinking about a commitment to user privacy. I think, though, one of the central problems here is that their business model is contradictory to individual privacy. Their business model has within it an intent to collect data and understand things about the individual, and that’s exactly what we’re trying to protect against when we think about privacy legislation or protective measures for consumers.

So you knew at the time – and presumably these companies knew at the time – that just the sheer fact of data collection leaves a consumer or a citizen vulnerable in some way to some harm.

Absolutely.

Yet there was no impetus on their part – yet they wanted to continue on unabated by government interference?

Well, I think that’s the case with almost any industry, to try to avoid regulation that can threaten the business model.

Silicon Valley And Washington D.C.

…The relationship – after Snowden, the relationship between Silicon Valley and the White House and certainly the intelligence community freezes up, as you put it. What does that really mean? What did that set into motion?

Well, I think what that really did is set a wedge between the industry and the government at a pretty sensitive time, at a time when we were all thinking about how we move forward from the Snowden disclosures. That didn’t help, because what we were trying to do is understand industry practices, and certainly government practices as well, and try to treat some of the harms that can come from them. And that wedge was not helpful.

Was there any political pressure inside the White House or from the outside to go easy on the industry?

Not that I know of.

President Obama was quite close with Silicon Valley, and they were supportive of him politically, and there was a perception that he was quite cozy with the industry. Is that something that you could sense from being inside the White House?

…That perception exists as many journalists have tried to illustrate. I don’t think there’s any validity to it, because what President Obama clearly said is we need consumer privacy; we need to push this legislation; we need to protect the individual. And if there’s one public policy measure that the industry will fight, it is that. It is that because that’s something that can really set a fire to the business model that’s at play here.

So you’re saying it was actually a brave move by him to do this?

Oh, for sure. Yeah.

Working For Facebook

We’ll take this one step at a time, OK? After your time in the White House – first of all, why do you leave?

Well, an opportunity came up to join a company that I really admired, Facebook, and try to help it develop privacy policy in a way that really tapped my interest and my area of study and expertise, you could say. I saw it as a really nice opportunity.

What was your job at Facebook?

My role was to serve as the company’s U.S. privacy and public policy adviser.

And what did that mean?

I would say that my role was twofold: first, to help develop products in a way that was sensitive to consumer privacy issues, and then also to help develop public policy stances for the company.

And in your experience there, how committed was Facebook to consumer privacy?

Well, I think Facebook, just like most other Silicon Valley companies, was tremendously committed to user privacy in the sense that we had a pretty robust team with some of the leading experts in the area, and the company still of course has that team. I think that in terms of the staff, it’s really top of the line. I do think that the underlying business model for the company is a challenge when we try to think about privacy, so that tension exists all the time.

How does that play out, though, that tension? If you can kind of set a scenario of how that tension plays out.

It’s hard for me to talk about that.

Which I guess you’re basically saying is there’s a tension between the business model or the bottom line and consumer privacy that always exists here.

Yeah.

If you could characterize in some way, did the company generally err on the side of the bottom line or consumer privacy?

…It’s hard to say that the company had to err on one side or the other. Instead, in any particular decision, what it tried to do is develop a solution that’s protective of both its commercial interest and its consumers’ interest. That’s kind of how these decisions were funneled out.

…Is it sort of like being in charge of climate change policy at Exxon to be in charge of privacy thinking at Facebook?

I think so. I think it’s a fair comparison, because that team’s practice is a challenge to the company’s business model and to its commercial interest. And whenever you have that kind of tension, it’s important for companies to have teams like that to place a check on their commercial interests running astray. I think it’s a fair comparison.

Was it a difficult job to have?

Oh, of course. I think, first of all, any job at Facebook is difficult, especially in this day and age. It’s always difficult because you have to constantly be thinking about how do we protect the long-term capacity for innovation while also making sure that we protect each and every Facebook user.

Why do you care about privacy? Why do you care so much about – why is this your life’s work? What’s the deal?

The long and short of it is that I kind of fell into the privacy world in grad school.

But in your core, your emotional, spiritual, citizen core, is it something that’s really important to you?

Yeah, absolutely. I care a lot about individual privacy. I think that the fact that it’s a human right in Europe really resonates with me, and I think that we deserve that in America, too.

Did you feel comfortable inside of Facebook with your views on privacy, being inside of a company that may not respect it as much as you’d want them to?

Well, I saw it as a great opportunity to try to join a company that is at the forefront of thinking about data collection and privacy at the same time. I would say that the biggest decisions about privacy in the world get made at companies like Facebook and Google and the others that collect a lot of data and use it for their commercial practice, so I saw it as a great opportunity to learn about the industry and try to make some of those heavy-hitting decisions.

If you had a takeaway from your time there about the values of that company, what would it be?

That’s a hard question. It’s very abstract.

…If you’re someone who cares about privacy, did it align with your values?

It’s hard for me to answer that.

OK. Why is it hard for you to answer that?

Well, I’m trying to find the right words.

It’s not an intellectual question. It’s more of one that’s kind of – it’s one of a feeling. You work for a place; they have ethics, values, like human beings do, and I’m just wondering if you felt like it was a comfortable place for someone like yourself who does truly care about this issue.

I think that the workplace was comfortable. It's just that there was so much happening, and I wanted to be more open in my thinking about it and have a bigger impact than I could as a company person.

And what does that mean, really? What’s the takeaway from that experience? You were inside of this company; you had a chance to go to the other side of this issue, right?…

Yeah. I loved my time at the company. That’s the truth. I really enjoyed it. I learned a lot; I met some really great people; and at the end of the day I just wanted to be at an institution that – I wanted to be in a place or do work that was more independent.

Do you feel like you made an impact there at Facebook?

I think I made more of an impact since leaving Facebook.

Regulating Big Data

…Do you think that if left to their own devices, they’re going to make the right decisions?

I think that any company will grow in the direction of highest profit margins if left alone, and I think that we do need to revamp our regulatory regime to really treat the problems, the disinformation problem, the hate speech problem, the privacy issues that we’ve seen. We need to revamp the way that our government works to respond to the harms that the industry has wrought.

Do you think that the company has – are you confident that they’re taking responsibility for what’s happened and that they actually are serious about getting it right?

I mean, it’s hard for me to – I feel like it’s a character attack against my former colleagues if I say that they’re not. That’s why it’s hard for me to answer these questions.

…I guess I’m just wondering, having been on the inside of something, having seen it, having been a part of that culture, albeit for a short amount of time, what do you think needs to change about that culture in order for there to be a more sustainable future here?

I don’t think anything needs to change about the culture. I think change needs to be forced from the outside. I think that the culture that has been adopted at Facebook or Twitter or Google is all natural.

In what respect?

Well, I think that any company, if left alone to its own devices, will grow like photosynthesis in the direction of where the most money lies. That's begotten by the culture, by the leadership, by the commercial goals that are set by the company, and I think that if left alone, they will continue to kind of be the way they are across the industry. And to treat that, we need to do things differently from D.C.

Facebook In The 2016 Election

You were there during the election, right?

Yeah.

What was it like for you during the election period to be inside of Facebook?

Well, I think that during the election period there was a lot that was happening on the platform. Off the platform there was a lot of news coverage of things that were happening on Facebook and ways that political communicators were advertising on Facebook and implications of bias. It was a very noisy period of time from the perspective of somebody working at the company, and that cacophony was really mind-numbing at times.

…What does mind-numbing mean?

Well, I mean, you get to the point where you're hearing or seeing stories about the platform in The New York Times every day, and your mind starts to shut off to the implications that are being drawn.

…Did it bring up questions of what role this company is actually playing in our democracy and society for you?

Absolutely. I think the company has an undoubted impact on American democracy and society globally, and I think some of the coverage that we were seeing in the public drew implications about the platform and the way that our algorithms worked that were really challenging for all of us and really forced us to start thinking about new problems.

What do you mean about how the algorithms worked? What were the sorts of things that were being leveled at the time that would cause you to rethink how your algorithms were working?

It started with implications in Gizmodo in early 2016 that there was a liberal bias in the Trending topics feature on the platform. I think what that forced in the months after was a shift from human curation to computer review, an integration of algorithms to determine what kind of content gets shown. Of course that would all beget, later on, the fake news problem and the disinformation problem that emerged on the platform, or at least in the public, in the months after that.

…Was that something you started to rethink, in terms of whether what the algorithm was engineered or designed to do was a good thing?

Well, I started thinking about this in that period, absolutely: was it possible that the News Feed and, more broadly speaking, the platform and the set of practices in the industry were designed to house us in different groups and feed us the content that the industry thinks we'll find most relevant and react to and click on the ads for? I started thinking, is this an issue, around that period.

Is this an issue, meaning is this actually something that’s good for us?

Absolutely. Yeah.

And what was your determination? What did you think?

Well, I think that personally, I thought that there was a deep-rooted problem that if we are creating these kinds of filter bubbles, and bad actors infiltrate the platform and take advantage of the filter bubble effect, what are the implications for our democracy? What are the implications for the individual to judge what’s right and what’s wrong, what’s true and false? How does it create economic pathways for nefarious activity? Those are the types of things that I think many of us were thinking about.

And do you remember, during the campaign, what sorts of things do you remember specifically reading about and thinking about during that time about Facebook’s algorithm and role in the dissemination of information? What were some things that popped out at you at that time? If you could situate me there.

Well, I think that those of us who were involved in politics at the time knew about Facebook's central role in the dissemination of political communications. That was true of the Trump campaign in particular. And I think that this was just part and parcel of the political reality that we live in now, that a lot of eyeballs are gravitating toward digital platforms, and the people who can master advertising and content curation and dissemination on those platforms are better poised to win politically.

Look, most of us don't have the experience of being inside of that thing while it's going on. What was it like reading this or thinking about these issues as you're inside? What was that experience like for you as this is happening in real time? The Trump campaign is utilizing Facebook as its primary communications and advertising platform, and what were your feelings then?

My feelings were simple. They were how can all political campaigns, including the one that I was trying to help, leverage this platform, because this is where consumers are going; this is what people are looking at. How can we take advantage of this to the greatest possible extent?

When you say “help,” you’re referring to your time when you were with Hillary?

Yeah. I wouldn’t say “with.” I was not a campaign employee. Officially speaking, I was a volunteer.

OK. Were you still an employee of Facebook at the time?

Yeah.

What were you trying to bring to the Hillary campaign as a volunteer?

Well, specifically what I was trying to do is help the campaign develop a technology policy platform. That was along with a wide group of experts from industry and public interest groups and so on and so forth. We were all contributing to the technology policies that the campaign was working on at the time.

And were you seeing the Hillary campaign utilize Facebook to its full effect?

I don’t know. Well, I don’t really have much context on that.

Were you seeing how the Trump campaign was using Facebook at the time?

No. No, because I mean, I wasn’t subject to those ads.

What was it that you were learning over time from the outside about Facebook’s role in the election?

…Well, what I was learning about was the tremendous capacity with which the various campaigns were using platforms like Facebook to try to push advertising, I think especially the Trump campaign.

And was that of concern to you?

No, it wasn’t. I mean, I don’t think political advertising is of concern to me, whether it’s on digital platforms or not. Yeah.

Were you still at the company when news broke that the Russians may have interfered with the election using Facebook?

I think so. I can’t quite remember.

When that news broke, what was your thought about Facebook’s role or responsibility?

When the news broke, I really felt as though there’s something that is deeply wrong here, and there’s something that both the company and the industry as well as all the stakeholders involved in our democracy really need to think about in the way forward, when we think about political advertising and the role of social media and the Internet. We need to start thinking about this whole ecosystem in a much better way.

Assessing Facebook’s Response

…How did you feel about the company’s response to the news that the Russians had gamed their platform during the election?

Well, I think that the company’s response to it was appropriate. It revealed as much as it knew at the time it knew it and as much as it could reveal without creating insensitivities.

When all this happened, here you’d been at the White House just a few years earlier, right, and the threats of data were not necessarily – or the threats of giving up and abdicating our privacy were not necessarily clear, was there some sort of light bulb in your head that goes off and says, “Oh, s—, this is one of those things that we didn’t really protect against”?

I think when I heard about the Russians, and when I ultimately found out more about how disinformation operators and even legitimate political communicators were using data and the targeted advertising pathways within both Facebook and other companies, I was pretty concerned, and that concern drove me to want to develop my own thinking about the whole industry and this whole set of practices. What I knew is that nefarious actors can start using that data and identifying certain sectors, certain pockets of the American population, 100 people here, 1,000 people there, 5,000 people somewhere in Texas and 3,000 somewhere in New York, as people who, if they see disinformation, a falsehood or an egregious story about one of the political candidates, are likely to click on it and reshare it, generating a vicious positive feedback loop of misinformation and disinformation sharing.

Basically enabled though by Facebook.

Facebook and other companies, I think. But I think it’s fair to say that Facebook sits at the center of this problem.

And did that for you cause kind of a moment of conscience about working there?

It’s hard for me to go there. It’s hard for me to go there.…

You’re telling me that Facebook’s role in the election had nothing to do with your departure?

It did, but that’s more of an academic point. It wasn’t about working there. It’s not that I didn’t want to work there anymore because I thought it was a terrible place, or I thought it was too noisy and I couldn’t think or anything like that. It’s just that I wanted to be independent, and I did want to study that problem from the outside. But it wasn’t that there was a certain set of bad actors inside the company or something like that.

The Russian use of Facebook is exactly what Facebook is designed to do, isn’t that right? Isn’t the way that they used the platform exactly what it was designed to do?

Absolutely. I think that – well, let me step back for a second, because I don’t think the Russian interference was that sophisticated in 2016. I don’t think they leveraged the filter bubble then. I don’t think they leveraged targeting as effectively as they could have. I think they will in the forthcoming elections. Yes, absolutely, I think that sophisticated disinformation operators are using this whole targeted advertising ecosystem in the exact same way that a legitimate commercial advertiser would. Therein lies the problem, and I think that both the company and government and other companies need to do a much better job of figuring out who those bad actors are and making sure they don’t infiltrate a democratic process again.

And you think that that’s possible to do?

I do think it’s possible to do, yeah.

Do you think that Facebook should have been more aware of that problem when it was developing back during the election?

I don’t know how aware of the problem it was, so I don’t know.

But were there safeguards in place at the time at Facebook to weed out bad actors from using their platform?

The answer to that is yes. The question is which bad actors we're talking about. If we're talking about foreign disinformation operators, the answer is I don't know.

Facebook And Filter Bubbles

You’ve brought up filter bubbles a few times. What’s a filter bubble, and why should that be something of concern? What does that mean, to exploit a filter bubble?

Well, a filter bubble is a social media bubble. It's a group of people using a social media platform who are all determined by the platform, let's say Facebook or YouTube, to be of a like mind. It could be that they are all dog lovers, or that they're all people who are prone to vote Republican, or whatever it might be. The problem that I see here is that the filter bubble, whether it's dog lovers or Republicans or Democrats or whatever it might be, is leveraged by the advertiser, and when that advertiser is a malicious actor and wants to leverage the fact that these people are all Democrats, or these people are all of a low socioeconomic class, or these people all live in this particular neighborhood in New York City, we tend to have a problem, because it is true that signals can be created within these social media platforms to draw out like audiences, audiences that are very homogeneous in their thinking on a particular issue.

When you can economically drive information, feed it to that filter bubble, and all you have going for you is money, and you don’t really have any commitment to ethical advertising, and in fact you want to try to convince people to do something that is not in their interest, we have a problem. And if the platform that is hosting all of that activity doesn’t do anything about it, again we have a problem. This is why I say that I think we need to rethink the way that our regulatory regime operates.

Isn’t the entire system and the algorithm designed essentially to create these filter bubbles or to segment us all into whatever engages us?

Absolutely. I think that the entire system is created that way, and in fact all companies within this sector are trying to get to that point. They are all trying to drive innovation in the direction of targeted advertising…. The difference from broadcast advertising, let's say, is that a targeted advertisement, which may be display or video or whatever it might be, is targeted at an individual, meaning that the advertiser knows something about the individual, knows that he or she will be interested in a particular shade of shoes, and can therefore target them.

I lost my train of thought.

If that’s the fundamental design of the thing, then how can you basically separate the algorithm and that goal from Facebook?

Well, it’s going to be very difficult. I think that the industry is all trying to get to that point, because whoever can have the biggest audience and guarantee to the advertiser that the advertiser will be able to reach a targeted audience most effectively, meaning with the knowledge that that audience really is interested in the product that’s being offered, whichever platform can do that most effectively is guaranteed to get the most money, is guaranteed to be the most commercially successful in this ecosystem. It is all driven by targeted advertising because it saves dollars for the advertiser.

When the news of Russian meddling on Facebook came to light, did it reveal a fundamental flaw with Facebook to you?

Not really. I mean, I think this whole ecosystem is designed to help advertisers reach their audience in an economical way. I think that’s fine and well. I think the flaw that Russian activity on these platforms revealed is that the platforms, at least at that time, were not good at detecting, for example, foreign election interference, and that’s a problem. I think platforms need to get much better at that and make sure that bad actors don’t infiltrate it.

…You said that the Russians were not that great at targeting, and you expect them to be better in 2018. Can you explain what you mean by that?

…Well, the Russian disinformation operators did not spend that much money on Facebook. Additionally, while the public does not have full information, what we do know is that they didn't engage in tremendously sophisticated ad targeting or content targeting in any way. I think what is dangerous and scary is that they are learning about these platforms. These platforms are continually evolving and integrating more and more artificial intelligence as we speak, and that can spell trouble for us down the line as the midterms and other elections come up.

How so? What kind of trouble? What’s the threat here?

Well, I think the threat is that the disinformation operations, whether they're Russian or Chinese or wherever they might be emerging from, can gain in sophistication; that is, they might collect more data, understand more about 87 million or 180 million Americans, section us off into filter bubbles, and start feeding us disinformation in ways where it is cheap for them to generate organic content, and they can really infiltrate these platforms in a major way. I think that, to be fair, the industry is doing everything it can, starting with Facebook and moving on, to try to protect both the platform and the people from that kind of activity, so it will be a dogfight.

originally posted on pbs.org