The Facebook Dilemma | Interview Of Roger McNamee: Early Facebook investor

Roger McNamee was an early Facebook investor and is the author of Zucked: The Education of an Unlikely Activist, due to be published in 2019. This is the transcript of an interview with FRONTLINE’s James Jacoby conducted on February 26, 2018. It has been edited in parts for clarity and length.

Let’s start back in the day. What was the first you heard of Facebook?

I had worked for years inside Kleiner Perkins Caufield & Byers, which in the ’90s was the Silicon Valley hub of the internet when it took off. I remember the day that Marc Andreessen first brought the Mosaic software, which became Netscape, into the company. I remember when Jeff Bezos came in with Amazon, and Kleiner did a company called Friendster. Friendster was going to be, not the very first social network, but the one that made that category real. They got to a million users, but they never got the technology to work properly.

But it gave me the idea: this is actually a really, really good idea. So I started to keep my eyes open, looking for things that would provide human connection, that would use the increasing surplus of bandwidth and processing power to do things other than treat computers as tools. There were a series of people who tried to do that. The first one that really broke out was MySpace, and the problems with MySpace, well, there were a couple, but they didn’t understand that the fact that there was plenty of bandwidth around didn’t mean you were supposed to waste it.

By giving users the ability to customize their pages, they made it so that for some pages the load times were completely unacceptable. But the real problem was that they allowed anonymity, which allowed adults to prey on children, and they had a lot of really bad examples of that happening. So I knew what not to do. And then all of a sudden we became aware that there was this kid at Harvard who had created a new product.

By the time it hit my screen, it was because a guy I knew, Jim Breyer, the lead partner at Accel Partners in Palo Alto, [Calif.], had made an investment, and the company went from a raw start to that investment practically overnight. The idea was so good and so well executed that it became a real thing almost overnight.

What Zuckerberg did that his predecessors hadn’t done is he focused not just on a streamlined design, something that would work well at any scale, but he also insisted on real identity, and he offered users the ability to control their own privacy settings. And that combination struck me as being the gold that was finally going to make this a real category, and there was no reason, once he took Jim’s money, that he was going to take money from anybody else. I think it was something like $14 or $15 million that Accel put in.

It was enough money that they were going to be able to go a long way, because the model they were doing wasn’t going to cost gazillions of dollars. But I was really curious, and I was hoping at some point I’d get to meet them and find out more about what they were doing.

Facebook’s Beginnings

What was so exciting to you about the idea of a social network? Why was that a novel idea at that time?

I grew up in a personal computer industry where there was never enough of anything. The processors were too weak to do what you really wanted to do. There wasn’t enough memory. There wasn’t enough storage. Being an engineer meant living in a world of tremendous constraints.

…By the late ’90s, we finally reached a tipping point where there was enough processing power, enough storage, enough network bandwidth to do all of the functions that had made personal computers successful, which were really personal productivity tools, the digital equivalent of hammers and screwdrivers. With that surplus you could now imagine doing things that were fun, interesting, different. You could imagine focusing on interpersonal activities. To me the idea of a social network was that for the first time computers could address things that were important to people outside their work, you know, something much more robust than email, something much more interpersonal than just watching a video. Social networks offered that promise, and I didn’t know exactly how that was going to turn out. But my thought process was that there were a lot of people in the world who have friendships, or at least acquaintances, that they would like to maintain and who didn’t have enough time to do it well.

I looked at this primarily as an adult market. I thought about parents and grandparents, this idea that there are all these life events, whether there’s a child being born or going to elementary school or graduating from high school or college or getting married, things you want to share with a large number of people that you don’t talk to very often, and would it be possible to create some kind of a product that did that? That was the promise of social networks, that they could fill that void.

And the irony, of course, when Mark Zuckerberg began, is that he started from a platform of addressing the needs of college students and then high school students, who in my mind were the people for whom this had the least value. They had the most time; their relationships were deepest. While for them it was mostly a tool, for adults I thought this might be game-changing, and it was, in many ways that I didn’t anticipate at the time.

What was it that you were hearing about Zuckerberg himself before you actually met?

Interestingly enough, I come from the era where engineers were all deep nerds, right? And that worked really well for me. That’s my tribe. I’m known to be nerdy myself, and I was.

When I first met Mark Zuckerberg, I knew almost nothing about him. What I knew was that he was a kid from Harvard who had started a digital equivalent of the student directories at Harvard, and he expanded that to something that was being used by college students and high school students, not just around the country but even outside the United States.

I knew almost nothing about it. There was a whole history of his experience at Harvard with his first product, where he was almost expelled because he’d taken the photographs from – essentially he had taken photographs that weren’t his out of college services, and he had ignored a lot of rules in making this thing. It had been so successful that it got the attention of the authorities, and they came down hard on him.

Then there was the whole problem with the Winklevoss twins that followed that, where he had – you know, the circumstances are always muddy, but there had been some issue in the relationship there. None of that was known to me at the time. When I met Mark, I met somebody who was doing something really, really cool, who by all obvious signs was a classic tech entrepreneur – you know, really nerdy, really focused, incredibly intense, but at the same time likable.

Meeting Mark Zuckerberg

What were your initial conversations about? Tell me your role at that time and what were you talking about.

When I met Mark, it was just a pure accident. One of his senior colleagues, Chris Kelly, who was the chief privacy officer at Facebook in the early years, sent me a message saying: “My boss has an existential crisis, and he needs to talk to somebody who’s been around a long time but who has no conflicts. Would you mind taking a meeting with him?” I leapt at the opportunity.

I do this. In those days, I did this a lot. I’d already been around 25 years or so, and I had learned years earlier that if you can help an entrepreneur early in the life of a company, even in some simple way, you have a foundation for a relationship.

When I got the call to do the meeting with Mark, I knew there was no investment opportunity. This was just an opportunity to get to know a 22-year-old entrepreneur who was creating an idea that had all the potential in the world. In my mind he had already broken the code. He had laid the groundwork for something that could be immensely successful. And I thought, “Well, hey, if I can help him out, how cool is that?” Right? I thought that was really great.

So you get the call from Chris Kelly, and basically what – bring me into that initial consultation.

So Mark comes to my office, and in those days I was a co-founder of Elevation Partners, and our mission was to invest at the intersection of technology and media. We covered a pretty wide range of company scale: not raw startups, but private companies in the expansion stage of the venture capital world, all the way up to private equity transactions in mature tech companies that were looking to re-establish themselves or change something in their situation.

We had one conference room that was configured like a living room. Actually, it was more like a den. It was set up with a giant video game console, because one of our partners, John Riccitiello, had been president of Electronic Arts (EA). I’d been one of the early venture investors. We were convinced that video games were one of our core areas, and we’d set this thing up as essentially a playroom. It had really casual furniture in it: no table, just the giant video game console.

Imagine the walls covered with huge posters from Pixar movies and other pop-cultural things, and Mark and I sitting on comfy chairs, maybe three or four feet between us, and, you know, we said hello. He sits down; I sit down. I said, “Before we start, Mark, you have to give me two minutes.” …

So I start off with Mark and I say: “If it hasn’t already happened, either Microsoft or Yahoo is going to offer a billion dollars to buy Facebook, and everybody you know – your parents, your board of directors, your management team, your employees – are all going to tell you to take it. They’re going to say: ‘Mark, you’re going to have $650 million of your own. You can change the world with that kind of money.’ Your venture capitalist Jim Breyer is going to say: ‘Hey, I’ll back your next company. It’ll be even greater than Facebook.’ And I want you to know, Mark, I think you should follow your vision. If you believe in the vision that you had when you created this company, you should see it through. I believe that Facebook will eventually be bigger than Google is today and that you have broken the code by focusing on true identity and giving people control of their privacy. You have something that’s going to be universal. In fact, I really think it will be more popular with parents and grandparents than it is with high school and college kids, and those people are going to be way more attractive to advertisers, and so the business will be worth more.”

What followed was the longest, most painful silence of my professional career. He spent, I’m going to guess, about five minutes trying to decide if he trusted me, and he pantomimed a whole series of essentially thinking poses, you know.

At about the three-minute mark, I thought my head was going to explode. You really can’t appreciate how slowly time moves until you’re one-on-one with somebody you care about who is obviously trying to decide if they trust you and isn’t saying a word. At the four-minute mark, I’m gritting my teeth and just hoping I can get through this. And finally you see him relax, and he looks at me, and he goes: “You’re not going to believe this, but everything you just said has just happened. How did you know?” And I said: “Well, actually I didn’t know. I’ve been doing this a long time, and I know how Silicon Valley works, and I know these companies, and I know a lot of the people around you.” And I said, “It’s just Occam’s razor. It was the simplest explanation for what might be going on, and it would have explained why Chris would want me to meet with you.”

And when you say, “What happened?,” what had he been offered at that point?

He’d been offered a billion dollars by one of those two companies. And I said, “Do you want to sell the company?” And he said, “I don’t want to disappoint everybody.” And to me that was a beautiful moment, right? I could completely relate to that issue. You have a vision, and all these people supported you, and all of a sudden they’re saying: “Hey, this works. Congratulations. Let’s call it a day.”

And I said: “That’s not the question I asked. The question I’m asking is, do you want to sell the company?” And he said, “Well, no, I’d actually like to see if I can’t make this thing work.” And I said, “Well, OK, let’s see if we can figure out a way to do that, because,” I said, “in the long run, if you make Facebook work, everybody is going to forgive you for not taking the billion dollars. In fact, if you make Facebook work, they’re going to be really glad you ignored them.”

And what was weird was it only took a few minutes. We just reviewed the way the company’s corporate governance worked, and it turned out Mark had what’s called a golden vote: he had the ability to determine the outcome of any board vote simply by how he voted. So if it was a 3-2 vote, and Mark was one of the two, the two won. It’s a very special situation, and it basically meant he controlled the outcome no matter what anybody else thought. We then spent a couple of minutes, not very long, just framing how he would share the message with everybody else, because that was really important.

To sell at that point would have been a tragedy, because Microsoft or Yahoo, whichever one was offering the money, would never have been able to follow his vision properly. They would inevitably have killed the company, and that had happened so many times historically when big companies buy little companies that I told him, “You should just feel confident and articulate it,” and I gave him a couple of thoughts about how to do that. The entire meeting lasted less than half an hour, and when he left, we had the beginning of a relationship.

What was the vision?

His notion was he wanted to connect the whole world and let them share the things that were important in their lives. If Google was about providing you access to all the information, Facebook was about connecting all the people.

Facebook’s Mission

…Actually, because I don’t want to get too bogged down in Google, but just help me understand, bringing it back to Facebook, how the centralization happened.

…Facebook saw what Google was doing and went, “Wait a minute; we can do that, too.” The social network, because it focused on human emotion and human interconnection, had data in it that you couldn’t possibly accumulate on Google no matter how many apps they put together, and Facebook realized they could add apps around what they were doing, things like Messenger and Instagram, that would give them a big piece of the internet, privatizing their own space.

Next door to Google, and between the two of them, they essentially locked up such a large portion of the internet that every content vendor, whether a newspaper, a television company, or a blog, was forced to go through them to reach customers. Essentially users saw the convenience of the free applications provided by Google and Facebook as so compelling that they didn’t worry about whether there were any downstream costs. They simply adopted them. They loved them. They would not move off of them.

And the effect of that was that if you were a business, if you were a publisher trying to reach those users, you were forced to go through Facebook; you were forced to go through Google, on their terms. And this was going to have enormous implications for how media worked as we went forward in time.

And tell me, in Silicon Valley at that point in time, had you heard the word “disruption”?

So the term “disruption” I credit to a blog called TechCrunch, which created an event called Disrupt, though I don’t think they invented the word; there were people using it before that. In the ’90s and before, disruption was viewed as a byproduct of doing something really great. It wasn’t until the mid-2000s that people started to refer to disruption as an end unto itself, that people started creating business plans designed to disrupt businesses without any thought to the downstream costs that might be embedded in that. I think, if you will, the word “disruption” is coincident with the value shift in Silicon Valley away from making the world a better place toward making entrepreneurs and investors much wealthier.

“Move Fast and Break Things,” right? Was that something – when did that come about? When was that something that you heard about?

When Mark Zuckerberg was at Harvard, I think, he was really fascinated by hacker culture, this notion that software programmers could, using their wits, do things that would shock the world – hopefully surprise and delight, but shock was enough.

So it was a little bit of a renegade philosophy, in that it had built into it a disrespect for authority and certainly a disrespect for that which had come before. It was this notion that “I’m better than whatever came before, and I’m going to show you in the products I create.” That mentality led to the Facebook motto “Move Fast and Break Things.” When I first heard that term, Facebook was really a little company, and it didn’t seem to be sociopathic. It seemed to be, well, young kids, hacker mindset, just being clever.

For better or worse, I think the company took “Move Fast and Break Things” as a genuine mantra, as a mission, and the benefits of it are obvious in the company’s success. The downsides are only becoming apparent to us now – and they’re very, very significant. You know, I think that it wasn’t that they intended to do harm so much as they were, at least in the design of the product, unconcerned about the possibility that harm would result.

Facebook’s Business Model

It’s very helpful. Was there a business model when you first came on board?

One of the things that was extraordinary about Facebook was that they were selling advertising space almost from the beginning of the company. In the year prior to my first meeting with Mark, they had generated $9 million in advertising revenue. The problem was there wasn’t a business model. What there really was, was an effort to generate revenue from the site, but no clear plan for how to make that advertising effective for the advertiser, attractive to the user, or sustainable for the company. It wasn’t a repeatable process.

…The business model we see today was created by Sheryl Sandberg and the team she built at Facebook, many of whom had been with her at Google.

The thing to understand, one of the things that made Google so unusual, was that when it started, the founders didn’t want to go to an ad model because they were afraid it would compromise the site relative to the interests of the user. But when they did decide to have an advertising model, the people they chose were three women: Marissa Mayer, Susan Wojcicki and Sheryl Sandberg. Sheryl had been sort of the business side of creating AdWords, and the three of them created this thing that was every bit as amazing as Google itself, because the ads were so useful to everybody. I mean, when you’re looking for a product, the notion that you’re going to see ads from people who are actually selling it, that’s useful to the user.

And obviously for the person selling it, the idea that you can reach somebody at the moment they’ve decided they want to buy something, that was useful. And so AdWords was this incredibly virtuous form of advertising. And when it became clear that the deal with Microsoft was not sustainable, I started to talk to Mark about what would come next. The problem was that Google isn’t Facebook. It’s a really different kind of business: it’s not about emotions; it’s about purchasing intentions. So when I thought about this, I realized there is no perfect analog, but the closest thing is AdWords, and Sheryl Sandberg was the business person. If Marissa was marketing and Susan was sort of the overall conceptual thing, Sheryl was the person who had focused on building the team that would sell AdWords and make it successful. And I thought, well, that’s going to be a huge part of what they do at Facebook, creating a repeatable process. And she had done that.

Sheryl Sandberg

[How did you meet Sheryl Sandberg?]

It’s January of 2001. The Clinton administration is over. Sheryl Sandberg is done being chief of staff to the secretary of the Treasury. She’s looking for her next gig. She comes out to Silicon Valley. She’s staying with her sister, who is married to my colleague, and comes and hangs out in my office at Kleiner Perkins as part of Integral Capital Partners. Her initial thought is she’s going to become an investor, and she’s looking around, and she’s very interested in my approach to investing, and I think she’s amazing. I mean, literally one in a billion; just a brilliant person, really insightful, really strategic, really personable. I think she would have made the world’s greatest investor. My partner, John Powell, says: “Roger, this woman is one of the greatest anywhere. We cannot ruin her by having her come here. She could change the world. We’ve got to introduce her to Google.” Of course that was dead easy to do, because Kleiner Perkins had been one of the two investors in Google, and John Doerr, who had done that investment, was three doors down from my office. So Sheryl spent some time with John, and one thing led to another, and she goes to Google.

At Google, Sheryl is part of the team that built AdWords, their incredible revenue engine. So we scroll forward to 2007. Mark Zuckerberg is trying to figure this out: there has to be a better way to monetize Facebook than this indirect relationship with Microsoft, and it was pretty clear his team was not coming up with one. He was going to go outside. He was starting to think about bringing in a partner.

Now, I looked at Mark, and I saw something about him that you don’t normally see in Silicon Valley, which was somebody who would be comfortable working with a woman partner. Mark’s mother is a physician. He has nothing but sisters. I had this instinct that he would be totally cool working with a woman, and then a miracle occurred. Sheryl Sandberg comes by my office for just a strategic conversation. She goes, “I’ve been offered a job to lead The Washington Post Co.,” essentially the newspaper side of The Washington Post, “for the Graham family, and I want to get your point of view on this.” I looked at her and said: “Look, nobody is a bigger fan of The Washington Post than I am. I mean, Watergate, the Pentagon Papers. This is a national icon.” But the newspaper industry just looked like it was in a death spiral, in part because of what she’d done at Google, right?

She had essentially forced all publications, all journalistic publications, to go through Google to reach customers. And I said: “Look, if you’re going to take that seriously, I want you to think about something that I think will be wildly more successful. I’d like you to go and meet Mark Zuckerberg and think about going to work at Facebook.” I hadn’t yet talked to Mark about this. I [had] thought about it, but I hadn’t yet talked to him about that. …So Sheryl looks at me and says: “I don’t know. He’s really young. He’s male. Do you think it will work?” And I said: “Look, just meet with him. See,” I said, “I have this instinct that it would work.” And she was appropriately nervous. I mean, Silicon Valley has a well-deserved reputation for misogyny, and it was getting worse, not better, at that time. But she went and met with him, and I think they hit it off. By the third or fourth conversation, I think they were having a really, really, really good relationship.

So long story short, in March of 2008, Sheryl joins the company as chief operating officer, which was a huge deal, because, from Mark’s point of view, that freed him up to focus all of his energy on Facebook’s strategy without having to worry about paying the bills and making the trains run on time. That was going to be Sheryl’s job, and it is no exaggeration to say she did it brilliantly, better than it had ever been done before, and Facebook created this extraordinary business very, very rapidly.

You mention that Sheryl was kind of a “one in a billion.” Just give me a little context. Why is Sheryl special?

When I first met Sheryl, the things that struck me about her were that she was incredibly modest yet so brilliant it was a little scary. I mean, her analytical abilities are as good as anyone’s I’ve ever seen. Her communication skills are as good as anyone’s I’ve ever seen. Her ability to show empathy was amazing. And there are other sides to Sheryl I didn’t see at first blush, but from the first time I met her, it was obvious that this was an exceptional human being, and at an amazingly young age she’d been chief of staff to the secretary of the Treasury. I mean, that is a huge job, particularly at that time. And in the midst of all that, she’d carved out this special space helping Bono forgive debt in portions of the world where it was never going to be repaid anyway, and where forgiving the debt was going to give giant parts of the developing world a chance to actually develop.

The notion that I had the privilege of meeting her was – it was like one of those things. I had one of those jobs where every once in a while, I get to meet really amazing people, and here was one of them. And as Facebook developed, it became obvious she was one of maybe the three most capable business leaders I have ever encountered.

Now, keep in mind that I knew Steve Jobs really well; I knew Bill Gates really well; I knew Jeff Bezos really well. And she’s in the top three I’ve ever worked with in terms of sheer business skills, which means one of those guys isn’t on that list. And the thing that’s difficult for me now, looking back on it, is that it turns out she had a blindspot that I didn’t see then; it’s only become clear since then. But we all have blindspots, so that in no way reduces my estimation of what a talented person she is. You always want to catch your blindspot before it does any harm. In this particular case, nobody was able to catch it fast enough.

Facebook’s Key Insights

We’ll come back to what her blindspot was, but I’m just curious what you know. Can you tell me what it was that Sheryl built there?

To understand how amazing Facebook is today, it’s important to understand what was different in their time from the time that came before. For 50 years, the technology industry had struggled to develop enough computer processing, enough storage, enough bandwidth to actually solve the problems that customers had. By the early 2000s, there was enough of everything that you could start to think about what else you might like to do. You could start to think about not just being a tool, but enriching the lives of the people who use your product.

Essentially Facebook was the first really great test case of that idea: connecting humans to each other is not the job of a tool; it’s an opportunity to do really, really great things. And when you create an advertising engine underneath that, you create a set of incentives. The way I think about it is that in an advertising business model, the audience, the readers, the users, they’re not actually your customers; they’re your product. The customers are the advertisers. So if you want to be critical of the approach Facebook took to advertising, it would be that in the transition from a world of scarcity to a world of surplus, when they could have focused on anything, they still treated Facebook as a tool for advertisers, which effectively made the users the fuel for that engine. They made a product that was a better tool for advertisers than anything that had ever come before it.

Essentially, they had this pipe into the brains of now more than 2 billion people, knowing the emotional state, more or less in real time, of all of their users, and they knew what mattered to them.

They knew who they knew, where they were going, what they were doing. Facebook systematically went from just interconnecting people to essentially having a surveillance system of their whole lives. One of the really important pieces they built early was a product called Connect that allowed you to use Facebook’s authentication to log into sites all over the internet. It was amazingly convenient. Like everything else Facebook did, it was free. But it also meant that Facebook knew where you were and what you were looking at when you were off of Facebook, so as a surveillance tool it was extraordinary.

Facebook had a bunch of other insights. They realized that while they were interconnecting people, they were getting a sense of their emotional hot buttons, and they took an adage from the earliest days of tabloid newspapers, this notion of “If it bleeds, it leads.” Appealing to people’s lower-level emotions, things like fear and anger, would create greater engagement, and in the context of Facebook, more time onsite, more sharing, and therefore more advertising value. And they realized that because they were wired into 2 billion or more people, they had the ability to customize a channel for each person. … Facebook made 2 billion individualized, highly personalized channels tuned to the emotional state of the user, and the key insight they had was that they could change people’s emotions. If they appealed to fear and anger, if they followed the old newspaper adage, they could increase emotional intensity, increase engagement and sharing, and therefore create more value for advertisers.

They combined this with a second insight that Google had at more or less the same time, which was that rather than just giving people unfiltered feeds of their family and friends, they could tune the posts they showed each user in a way that would ensure higher levels of fear and anger, higher levels of emotional engagement. This involved something that Eli Pariser calls a “filter bubble,” the notion that by giving people what they want, you’re giving them their own reality: taking all of their pre-existing beliefs, confirming them over and over again, making them more rigid, more extreme, and effectively more emotionally engaged.

…Facebook and Google didn’t invent filter bubbles, but they deployed them differently than they had been deployed in the past. In the era of network television, when I was growing up, we all watched the JFK assassination; we watched the Beatles, the moon landing, “Who shot J.R.?,” the final episode of M*A*S*H, and those things were a shared reality. We all saw the same things. It was a common set of facts that brought the country together. Yes, it created conformity, but it was also part of the social fabric of the country.

For Facebook, the strategy was literally the polar opposite. The notion was, “Let’s find the things that make us different as individuals, and let’s inflate those in people’s emotional states in such a way that it makes them different from everybody else.” In effect, polarization was the key to the model. People in the middle were not economically interesting, so the goal was to find whatever the issue was, and conspiracy theories were an unbelievably great way of doing this, find the thing that could trigger fear and anger, and push on that by little degrees through the News Feed such that over time people became more valuable to advertisers. They would be more engaged, and if you look at the numbers on Facebook, if you look at user time onsite, engagement, sharing, all those things, and then you look at the revenues and the profits, obviously it has worked fantastically well.

The problem with all of this was that it’s the perfect advertising tool, but only so long as there are no side effects that cause harm, because if there are side effects, then the advertiser is suddenly associated with something that’s harming society. And in fact that is what has happened.

Essentially Sheryl was building this ad tool, this incredibly potent 21st-century tool. That’s what her role was.

Sheryl’s role was to build the entire advertising business of the company, and the tools are a key part of that. When they were just on desktop computers, the harm was negligible. What really changed the game was the iPhone and the rapid ubiquity of smartphones. … When Sheryl joined Facebook, her job was to build a sustainable, scalable advertising business. And keep in mind, this was at a time when the company already had revenue, and it was already prospering. So the question was, what were the choices they were going to make? They basically focused on the advertisers as the customer, they used the user as the fuel to drive all of this, and they created something that had efficacy for the advertiser unlike anything we’ve [ever] seen before. The question that never got asked was, really, would there be any side effects of this model? Would tapping into the emotional state of all these people have any unintended consequences that would be harmful? You know, if you want to just look at it commercially, would they be bad enough to harm the advertisers who are using the platform? If you want to look at it holistically, could it actually harm society and the country and maybe the world? And the sad thing was that, not having thought about it early and being so effective at executing the strategy, Facebook was very far down the road before any of those questions got asked.

Again, keep in mind we’re living in a time where there’s more computing power and more network bandwidth than you need to do these applications, so there were other ways they could have done this, but no one ever thought about it. And they were not alone in that problem. The industry was not mature in its approach, and interestingly enough, because we live in a time of surplus, Facebook didn’t have to go out and hire the rocket scientists of prior generations, the people with experience. They were able to hire mostly kids fresh out of school, so they didn’t have seasoned hands who might have said, “Hey, hang on; you could look at this a different way.” They were able to execute the strategy, [and] because it worked so well commercially, nobody really had an interest in questioning the methods.

And the methods. One thing that I want to get to is the targeting. One of the amazing things that they were able to do with this tool was to surveil, to target, to manipulate. Bring me into that a little bit.

When you have more than 2 billion users and you essentially have their entire emotional profile available to you at any point in time, the opportunity to target is there, it’s compelling, and it’s really valuable to advertisers. So Facebook put a lot of energy into refining the tools, really the automated tools that would allow people to select exactly the people they wanted to reach. … I’ll never forget the one time I went to an advertiser pitch at Facebook. I think it was about 2009, and they were testing a pitch, and they wanted my reaction to it. Essentially they were saying they could deliver to that advertiser the equivalent of a Super Bowl audience 365 days of the year. They could do it at scale, but they could do it with much greater precision than you could get by buying a television ad on the Super Bowl. I don’t know if they could really deliver that when they first offered it, but it didn’t take them long. They got there, and they’ve been there for a long time, and now they can deliver an audience bigger than that with extraordinary precision. And again, I don’t think it ever occurred to anybody that there might be something wrong with that; that in the wrong hands, that capability would produce not only unintended consequences but unintended disastrous consequences.

One of the things in this era, the early era, is this idea of kind of techno-optimism and data-optimism. Can you talk about that part of the culture out here?

When I arrived on the Silicon Valley scene in 1982, we were at the very beginning of the personal computer era. The prior 30 years had been focused on the needs of governments. This was the era of mainframes. Minicomputers had started in the mid-’70s; they were the first things targeting the needs of businesses. … And in that era, it was easy to believe that the product you made next year would always be better than the product this year and that we were asymptotically approaching the outcome we were looking for, but we were never going to get there; that the needs of the user would always be greater than what the hardware could deliver. And that really didn’t change until 2000 or so, when finally the hardware and software caught up with those business needs. By then technology optimism was so deeply ingrained in the value system and in the beliefs of people in Silicon Valley that they’d come to believe it was akin to the law of gravity: that of course technology makes the world a better place. It always had; it always will. That assumption essentially masked a set of changes that were going on in the culture that were very dangerous.

Essentially once you got to an era of surplus, you no longer needed experienced people. So the demographics inside startups changed from seasoned professionals coming out of a field that they understood intimately to people fresh out of college making products for people their own age without any context, without any sense of history, and deeply imbued with a libertarian value system that had begun in the early ’80s in the country and had prevailed long enough at that point that young people didn’t realize the world had ever valued things differently. And the libertarian value system, you know, while fine on its own merits, can be very dangerous in the sense of a Silicon Valley entrepreneurial model, because it basically says you’re only responsible for yourself, and if you’re successful, it’s because you’re really smart, and you worked really hard, and somebody who’s not successful, by definition, didn’t work hard enough or isn’t smart enough to have been successful.…

Facebook’s Mission

In those early days and going forward, what motivated Mark? What motivates him, you think?

The mission of Facebook in the early days was to connect the whole world. You know, when you’re in a Harvard dorm, that’s a pretty outrageous idea, but by the time he came to my office that first time, it no longer seemed outrageous. The world was physically connected. There was a backbone. Not everyone, but most of the world had access to the internet.

From the early days, Mark had this vision of connecting the whole world. And you say to yourself, wait a minute, when you’re in a Harvard dorm room, that seems like a ridiculous idea. But the truth is, by 2004 there was a physical internet connection more or less everywhere. A huge percentage of the global population had access to the internet, and it was not crazy. Somebody was going to connect all those people. Why not him? I think when he began this, he had a goal that was attainable, and the further along he got, the more his team understood that interconnecting the world created business opportunities that you couldn’t possibly imagine in a Harvard dorm. You could not have imagined creating a messaging service that would allow you to do payments. You couldn’t imagine creating a marketplace that might be bigger than eBay. You couldn’t imagine that you’d be able to change the emotional state of 2 billion people. You didn’t imagine that your product could be used by foreign countries to influence elections or by other countries to condone really horrible behavior against their own population.

Was the profit motive a corrupting influence on the social network idea or ideal?

One of the things it’s hard for me to judge is what they were thinking in the years after Sheryl arrived, because my interaction with her was relatively limited after, say, the first six or nine months that she was at the company. Mark was focused on strategy issues, and I was mostly interacting with him. I would see Sheryl socially, but not in a business context. After sometime in 2009, I just didn’t see her much in that context. And the business model as it exists today really developed after that period, and I was – and I fault myself for this – I was so proud of what they were accomplishing that I didn’t feel a need to probe deeply.

I wasn’t asking philosophical questions then. In retrospect obviously I wish I had, but I did not. I think they were motivated by doing something no one had ever done before. When I said to him, the first time we met, “I think you [are going] to be bigger than Google is today,” I don’t know how many people would have said that to him at that time. I am confident that he came to believe that that was a reasonable goal and that being bigger than anything else on the metrics that mattered was attainable.

The first sign that that was corrupting was at the initial public offering in May of 2012. … When Facebook went public, the offering itself was a social phenomenon; it was a cultural phenomenon. They were already huge, with hundreds of millions of active users. But the IPO was the most extraordinary blast of free publicity, and up to the offering itself, all of that publicity was positive. … The IPO set all these records against a backdrop of uncertain business fundamentals – not a problem, just that they were making some transitions, so things were less predictable than they would have liked. And as a consequence, that uncertainty, layered onto the artificially high price of the IPO, produced several months of declining stock price. Now, what was fascinating was none of that affected the business. Essentially all that free PR had gotten people onto the site, and the people who bought the deal got hurt. But the business itself came through more or less unscathed.

But that was the first real sign to me that ego could have been an issue inside the company and that it might not be able to show perfect judgment. That is, in essence, I was counting on Sheryl to be the judgment in all of these things. Her relationship with Mark was amazing because he gave her total freedom to operate that part of the business, but I think the quid pro quo was that she didn’t tell him how to be the CEO, and as a result, there was no internal person to speak truth to power. And as we’ve learned over time, the board was not doing anything constructive to speak truth to power. … They were accused of contributing to the problems in the IPO, and obviously there was nobody who stood up to suggest any of the things that happened afterward might have been inappropriate or less than optimal for the company.

When a company like Facebook goes public, does it change the incentive system? Does it change how a company fundamentally operates, in that it’s now accountable to shareholders as opposed to its users or its advertisers? How does the IPO change Facebook?

I think that the IPO had less impact on Facebook than is normally the case in the technology field. … I look at this and go, what going public did was it created a barometer for sentiment about the future of the company. And once the stock bottomed, once it got past the IPO far enough that investors stopped worrying about the uncertainty, they had essentially a nearly five-year period of uninterrupted growth. It was the equivalent of being on a straightaway with no pebbles on it where you could go hundreds of miles an hour. They were going as fast as the company could possibly go.

And in that period, they had every single day more users, more usage per day, more revenue, more earnings, higher stock price. They were in their own filter bubble, a filter bubble where all the news was good news. And if you go three, four, five years where literally everything you touch turns to gold, it’s very natural to get the sense that maybe you’re just better than everybody else. I think that’s what happened to Facebook. They had become this perfect, finely tuned engine of growth, and they subordinated literally everything else. You know, they would pay lip service to regulators; they would pay lip service to the health and well-being of users. They basically did everything they could possibly do to make the company more valuable to advertisers, to make the reach of the product greater, to essentially validate this notion that in connecting the world, they had become the most important technology company on earth. And that was clearly true. On their metrics, they succeeded maybe better than any tech company before them. The problem was in the metrics: the things they had chosen to focus on, while extraordinary as a value creation vehicle, and extraordinary because they were really the zenith of the libertarian laissez-faire economic model that has prevailed in the United States since 1981, hastened the end of that era; essentially they were so good at it that the world was forced to confront the dark side of internet monopolies and social media.

What were the metrics for them?

Well, I think it was very simple: users, minutes of use per day, revenue, earnings, stock price. Those metrics all were positive every single day. There were lots of secondary metrics that they would also follow, but those were the big ones. And honestly, as a shareholder, I followed those metrics, too, and I thought it was amazing, right?

Yeah. I mean, your bank account was going up at the same time.

Everybody’s was, right? So you look at this, and it was easy and convenient to ignore any signal from the outside. That’s why in some ways I was so surprised when, in early 2016, I noticed things were not right. I was surprised: why did I notice it at that moment and not some other moment? What was it? I think it is because it was around the election, and so much was at stake that it bothered me in a way that analogous things that had taken place before didn’t, and should have.

Avoiding Regulation

How skillful was Facebook in avoiding regulation?

The amazing thing is that Facebook faced practically no regulatory friction during its growth. They came along at literally the perfect time, in that the internet was the last industry in the United States to get a really meaningful assist from the federal government. There is a section of the Communications Decency Act of 1996 that was created to protect them from litigation about intellectual property theft or pornography posted by third parties. That safe harbor essentially gave the industry a license to accept all forms of content from anybody, irrespective of whether it was legitimate or illegitimate.

So, you have an environment where the governing philosophy is laissez-faire and where traditional antitrust has been replaced by a notion that there was no level of economic power that was inappropriate so long as consumer prices did not rise, so that by being a free service, Facebook was essentially allowed to consolidate every business around it. No one raised an alarm over Instagram or WhatsApp. Nobody raised an alarm over Messenger. And the European Union has begun to ask a lot of questions, but in the United States, they had free rein. And there have been situations, as with the Department of Housing and Urban Development, relative to the opportunity to use Facebook tools to discriminate in housing in violation of the Fair Housing Act. But as far as I can tell, Facebook’s response to those things has been, “We will fix it,” and then, “We have fixed it,” and at least some of the time they didn’t.

…On privacy, one of the things that’s interesting to me is Facebook’s – help me understand how data, and the handling of privacy, is the lifeblood of this tool.

A few years ago, I had an insight that hit me like a bolt of lightning, and that is there must be some evolutionary basis for convenience: that humans, having come from survival mode, look for opportunities to rest. So we favor convenience, even in situations where we know there’s going to be a downstream cost that we would certainly never have accepted if it had been part of the price at the beginning.

So convenience is built into all of the major internet platforms. Facebook, Google, and others deliver so much value in the moment that consumers can’t help themselves. You know, when you’re checking a smartphone 150 times a day, when you are interacting with a product like Facebook half a dozen or a dozen times a day, you’re going to get addicted, and convenience is a big part of that. The fact that your phone is on your body, that you can just pull it out at any time – “Hey, did I get more likes? Did I get some notifications? Have I missed anything?” – those impulses are really powerful, and if you make that stuff convenient enough, you can create addiction, and that became central to the Facebook business model. Again, it’s one of those things where it’s really good if all you care about metric-wise is profits. If you’re worried about society, if you’re worried about the public square, if you feel any sense of civic responsibility, there’s some line short of where we are today where you would have hoped Facebook would have stopped.

Eric Schmidt, the former chairman of Alphabet, former chairman of Google, when asked about privacy, he used to say, “At Google we go right up to the edge of creepy and never go over.” What he didn’t say, but which is I think indisputably true, is that all of these platforms have used convenience to move the edge of creepy further and further away from the standards we would have accepted five, 10, 15, 20 years ago, to the point where people willingly give up privacy, give up personal data in contexts where harm is inevitable. We live in an era where it wasn’t so long ago that Equifax allowed itself to be hacked and to have essentially the financial profiles of every adult in the United States put into the dark web. You take that data in combination with the kinds of things advertisers can do on Facebook or Google, and it’s not an exaggeration to say that it’s just an invitation for really bad things to happen.

And what happens is the convenience is so great that people just go fatalistic on you and go, “Eh, you know, there’s nothing I can do about it.” And my reaction is, well, actually there are things we can do about it, and maybe it’s time to have a conversation about them and when we should start.

Isn’t people giving up their data – giving up their privacy – essential for the business model to thrive and grow?

It is absolutely essential for the model of Facebook or Google or others that consumers be willing to share personal data. The conversation we need to have is about the terms of that relationship. I mean, the assertion by Google and Facebook and others is that once the consumer gives them the data, the platform owns that data for all time. Now, I look at that and go, well, why is that? Is that actually a reasonable relationship? I don’t think so. I think there is a very reasonable argument to be made by Facebook and Google and others that they are providing a very valuable service in exchange for the use of data. I accept that. But I think there should probably be a statute of limitations on how long they get to use it, and that after that, there should be some form of renegotiation. I think, humans being humans, most people are going to find some accommodation and work with the platforms to let them continue to use the data. But for people who would like to have their data back, who would like to have their privacy back, there should be some path for doing that.

The Europeans have a whole regulatory regime coming online that seeks to do that, and it’s going to take years to get it to work properly. But there is that essential notion that people should have the right to be forgotten, that they should have the right to control or own their own data, and that, since they were the ones who contributed everything to Facebook, their rights should be much greater than zero. That, it seems to me, [is] a conversation that is long overdue and that I would hope the platforms would embrace, because the history of regulation in these environments is that while it’s painful on the first day, in the long run it’s really healthy for huge, successful companies to be regulated; that it establishes a more sustainable relationship with their customers and their users and in the long run works to the benefit of everyone.

The resistance, I think, from the platform is largely based on inexperience and a failure to study history that they would see if they went and looked at the generations of businesses that came before them; that the kind of conversations I’m proposing right now have generally been super-healthy and have produced better economic outcomes for shareholders.

The Arab Spring

I’m curious, as an observer and as someone who helped build this company and advise the company early on, how do you view the Arab Spring? How did you view the Arab Spring in terms of Facebook and the promise of it?

When the Arab Spring happened, it was a form of validation that energized me intensely. I am inherently optimistic, and really I want to believe the best in people; I want to be a Pollyanna. I want to believe that tomorrow will be better than today, and the Arab Spring had this optimism built into it, this sense that Facebook could be a tool used by the powerless to balance the scales against the powerful. I remember having a similar experience with Wikipedia during the Virginia Tech shootings, when I was working with the Wikipedia team, and it validated their whole approach, because they actually beat the mainstream media to the story, since people were posting on Wikipedia.

The notion that internet platforms could make society better seemed like an inevitable outcome once the Arab Spring happened. With the passage of time, we’ve learned something very different, that these are tools that in the hands of autocrats and despots have extraordinary power to do harm to the innocent, and Facebook unfortunately has become wrapped up in that, that exact problem. And the promise of the Arab Spring has given way to disappointment and frankly just horrific abuses of the platform by bad actors. And, you know, I did not see that coming at all. I wanted to believe the best, and so I was blind to the worst until it was too late.

What I love is that you described this [Alfred] Hitchcock scenario. Tell me about that.

In January of 2016, my wife and I went on vacation, and every day I’m on the internet because where we are, we have a really great internet connection, and I’m checking Facebook, and I start to notice memes, images with text on them, ostensibly from groups associated with the Bernie Sanders campaign but with content that was deeply, offensively misogynistic, and it’s being shared by friends of mine. Initially it’s just a trickle, but within a matter of days, these things are spreading virally, and the groups all have relatively innocuous but official-sounding names. And I’m thinking, there’s no way the Sanders campaign is OK with this, right? And yet it appears there is money supporting these ideas. I make a note, and I think to myself, I run a band’s page on Facebook, so I’ve done a lot of Facebook advertising, and I know how hard it is to get things to spread virally; it’s really important to start with an audience, and how did they get people into these groups in the first place?

In March I see a news report that Facebook has expelled a group that was using its application programming interface to gather data about people who expressed an interest in Black Lives Matter. They then sold that data to police departments. To me that was a clear violation of the Fourth Amendment right of privacy, and I think to myself, “Wow, bad actors are doing bad things on the platform.” And Facebook would say, “Well, we expelled them,” and I would go, “Yeah, but irreparable harm had already been done.”

…In June the United Kingdom has a vote to decide whether or not to leave the European Union. I would not normally pay that close attention to U.K. politics, but this one produced such a shocking outcome that I had to pay attention. The expectation immediately before the vote was that the Remain campaign, the ones choosing to stay in the European Union, would win by about four points, maybe a little bit more. When the vote came in, Leave had won by four or five points, and that was outside the range of possible outcomes that people were expecting.

The thing that was really obvious was the difference between the two campaigns. Remain had what I would describe as an emotionally neutral campaign, arguing that the U.K. had the best of all possible worlds. They had all the benefits of membership in the European Union, but they got to have their own currency, which really was a deal you could never top. The Leave campaign argued something emotional, really intensely emotional, and there were two pieces of it. The first was that they argued that by leaving the European Union, they could stop immigration into the United Kingdom, and effectively they blamed everything that was wrong in the country on immigrants.

It was certainly a xenophobic argument, and for many people, there was an element of racism, too. But they did something really clever. They paired that argument with the notion that there would be huge savings from leaving the European Union and that all of that would be piled into the national health system and would improve health care in the United Kingdom, which essentially allowed the people who were xenophobic or racist to feel really good about their vote. The Leave campaign had been very successful at spreading this message over Facebook because it benefitted from the fact that users on Facebook share things that make them either afraid or angry, and the Leave campaign hit those buttons really effectively. Meanwhile, the Remain campaign is arguing stay the course, right? No emotion at all.

I get hit by a bolt of lightning, that it’s possible that there is something about the way Facebook works that creates a structural advantage for emotional political campaigns, particularly negatively emotional political campaigns, against neutral or positive ones, and that – I found that incredibly disturbing.

Within essentially a few weeks, we start to get the first news about the Russians having hacked the Democratic National Committee and Democratic Congressional Campaign Committee, and the whole Russian story picks up momentum into August, when we find out that Trump’s campaign manager has deep business ties in Russia. And I have this nagging feeling: What if the Russians played some role in this? I mean, I have no basis for believing that. OK, it’s just nagging me. Then in August we get the report from Housing and Urban Development that they are sanctioning Facebook for having tools that essentially enable people to discriminate in violation of the Fair Housing Act.

At this point, I have a series of different examples in unrelated areas that suggest to me that there is something wrong systemically with the Facebook algorithms and business model, something that is allowing bad actors to harm innocent people.

I reached out at that time to Kara Swisher and Walt Mossberg at Recode, which is a technology blog – really the best in that business. They were good friends of mine, and I thought, well, I would just check with them: Were they seeing something similar? Did it bother them? Were they looking at it? Initially I didn’t hear from them. Sometime in September I got an email back from Walt Mossberg, who said, “I don’t think we’re going to cover this as a story, but I think this issue that you’re raising is important, and we ought to know about it, and you can get the conversation started by writing an op-ed for us.” So I set out to start writing an op-ed in late September, and I didn’t feel any rush. I mean, my sense was that Clinton was going to win the election, that whatever was going on wasn’t going to affect the election, but that there was a systemic problem that would be persistent and that we needed to deal with it. But I could take my time, and really importantly, I didn’t want to focus on election stuff, because if Clinton won, I didn’t want Facebook to say: “See? It didn’t affect the election. So these aren’t really issues.”

I wrote the op-ed, and it took me basically a month to write it. … I was preparing to publish this, and my wife pointed out, “Hey, don’t you think you want to send it to Mark and Sheryl first?,” and I go, “Duh.” That was such a good idea, because these are my friends.

The goal wasn’t to publish an op-ed; the goal was to help them fix the problem. And I knew because Recode hadn’t wanted to pursue the story, it wasn’t that big a deal to them anyway. I mean, what I was really trying to do was to help Mark and Sheryl get this thing right, so what I did was instead of publishing, I sent it to them, and they responded right away.

And their responses were more or less what I expected, which is to say that what I had seen were isolated problems and that they had addressed each and every one of them. They then also said: “Listen, you’re important to us, and we want you to know that we are taking your ideas seriously, so we’re going to hand you off to one of our senior people, and we’d like you to dig into the issue. If there’s something for us to follow up on, we will.”

So they make a handoff. They hand me off to Dan Rose, who is one of the most senior, longest-serving executives at Facebook. He was exactly the right person for this because I had a great relationship with him. I’ve known him a long time. I trusted him. He’s a great listener; he’s really thoughtful, and he’s a problem solver. So I got on the phone with Dan, and his initial reaction was the same as Sheryl’s and Mark’s: I was looking at an isolated set of problems that they’d addressed. But he threw one more piece into the conversation, which is, “Hey, you know that we’re a platform, not a media company.” He’s essentially referring to Section 230 of the Communications Decency Act, which gives them a safe harbor against third-party actions. And he’s going, “We’re not responsible for what third parties do on the platform.”

We spoke, I don’t know, two or three times over the next week. Then the election happens, and all of a sudden, my whole psychology changed. I went from having a reasonable conversation with my friend to being just like, “Oh, my God, what has happened here?,” and, you know, “You guys have almost certainly played a role in this.” And bless his heart, Dan listened to me patiently, and he let me vent. We had really thoughtful conversations, and my key addition to the conversation was to say: “Dan, at that point, you had 1.7 billion members. If they conclude you’re responsible for what happens because of a third party, it won’t matter what the law says. You guys are responsible. You’re risking your brand for nothing. What’s the upside of the position you’re taking? You have this opportunity where nobody is going to blame you. They may hold you responsible, but they’re not going to blame you for what’s happened. You didn’t do this on purpose.”

My thought was that they could be like Johnson & Johnson was in the ’80s, when somebody tampered with bottles of Tylenol and put poison in them. Johnson & Johnson’s reaction wasn’t just to take the Tylenol off the shelves of those particular stores; they took it off everywhere, and they kept it out of the market until they could create tamperproof packaging. The great insight that they had was that when your users are harmed by the action of a third party, the right thing to do is to leap to the defense of your users and to accept whatever short-term financial penalty, because if you do that, at least in the case of Johnson & Johnson, Tylenol users rewarded them. … For doing the right thing, users basically said, “We trust you more than we did before,” and in the long run it worked out to their benefit.

I thought Facebook could do something like that; that they had not been the ones who caused these things to happen, but they could stand up and say: “We’re going to protect our users. We’re going to do the right thing, and we’re going to start out by helping everybody understand what happened on each of these things going on. We’re going to reassess our priorities. We’re going to reassess the metrics on which we run the company to take into account the fact that our impact is so much greater now than it used to be, and that as Facebook, as a company with billions of users, we have influence on how the whole social fabric works that no one’s had before.”

You described the experience in kind of Hitchcockian terms, of watching all these things happening. Can you tell me how it felt?

Keep in mind, it’s after the election, and I’m so unhappy about this. And at the same time, Mark Zuckerberg is at a conference telling everybody it’s crazy to think that they might have affected the election. Now, I’m trying to imagine how this feels to Dan [Rose]. I mean, he’s got this guy on the other end of the line who’s really, really concerned about this thing, and he kept his cool. I really admire him for the way he handled it, and he invited me to continue to interact, and we talked several days in a row. And then he had a really good idea. He said: “Look, I understand you think this is systemic. We think these are isolated examples. If you find more examples that point to the same problem, maybe that will change the way we look at this. So why not spend the next month just rooting around the internet looking for examples of cases where bad actors have used Facebook to cause harm?”…

Warning Facebook

Just going back to one specific, what did your note to Mark and Sheryl say? What did you say to them?

Keep in mind, when I wrote this op-ed, I was writing an opinion piece for a tech blog, so it began with a simple statement of “You know, I’ve been an investor; I’ve been incredibly private. But now I’m afraid and embarrassed by things that I have seen here.” Then it’s about 1,000 words, a little bit more, and it goes through an articulation of the problem, which is that the business model and algorithms at Facebook can be used by bad actors to harm innocent people. I give them the examples, I describe why I’m worried about them, and I talk about the fact that they have a responsibility that is greater than the one that they’ve internalized, and that the brand, the health and well-being of the company are all dependent on them making good choices. And, you know, it would have been better to start with just a note to them, not sending them the op-ed, but I wasn’t clever enough to figure that out.

But I never published the op-ed. I felt it was really important, once I started down that path, to give that conversation all the time it needed. These are my friends. I thought that eventually I’d be able to convince them to take some action. So the op-ed remains safely on my hard drive and has never been published. That was the right answer. I mean, my goal then and my goal now is to get them to do the right thing. …There’s nothing more going on. This is about protecting democracy. This is about protecting the health of users. This is about protecting the economy. And Facebook has the power to do all of those things without harming itself, but it needs to change the way it’s approaching the problem. What we see them doing is resisting, at every step, any kind of input from the outside.

It’s almost as though they’re telling us: “We’ve created this incredible thing; we’ve given the world this gift. You have no right to tell us what to do with it.”

And there’s the idea that you’re kind of watching all of these things happening. I mean, it’s a nightmare scenario, like watching a horror film with something on the loose. Can you just bring me into that sense of how you felt in 2016?

Keep in mind, I’m an analyst by training and profession, and so my job is to watch and interpret. And the reality is, I’ve had success with it, but it’s not uniform, right? I’ve gotten many things wrong along the way, but I’ve learned to trust my instincts on some issues. And in this particular case I felt like Jimmy Stewart in an Alfred Hitchcock movie, where I saw something during the New Hampshire primary that I didn’t understand, but it really bothered me. And I pulled on this thread a little bit, and I find these other things going on. And I don’t really understand what’s going on. And all the way through the election, it’s almost like the first reel of that movie, where I’m getting into a problem I don’t understand with no way out, because I don’t know which way is up.

…In the time since then, during 2017, we got into the second reel of the movie. And I began to develop a better understanding of what was going on. But when I look back on this, I think to myself: “I’m really glad that I took seriously those things that I saw.” I’m incredibly angry at myself for not having understood sooner the failure mode of Facebook around its business model and algorithms – relative to the public health of its users, relative to democracy, relative to the whole entrepreneurial economy – and for not having seen those things soon enough to do something about it, when the cost was a lot lower than it turned out to be. …

What was it about Facebook that you were awakening to throughout this 2016 period? This thing that you’d helped to birth to some degree, what had it become?

As an investor, it never occurred to me that Facebook would ever be a weapon, that people would do harm with it. I had no illusions about how the profit motive affects people’s priorities. Given the scale, it was inevitable some things would go wrong, you know, in isolation. What never occurred to me, and what I really struggle with every single day, is that the conscious choices and priorities of the company put in motion forces that have undermined democracy, have undermined our economy, have undermined the public health of our citizens, and have actually done the same in countries around the world. It never occurred to me that anything I was involved in would ever – I’ve been really careful about my choices. I’ve consciously turned down companies that I knew would be financially successful because I struggled with their value system. I did that with Uber; I did that with Spotify; I did that with Zynga, knowing they were going to be successful but feeling like the value systems of those companies might be fine for a lot of people, but they weren’t fine for me.

And I never had that sense about Facebook. I was never worried that they were going to go across the line, and I punish myself every day over that issue now, because obviously, if the harm is great enough, it doesn’t matter that you didn’t intend it.

originally posted on pbs.org