How To Avoid Exploiting Customers’ Behavioral Biases: Make Sure Your Digital Design Functions Are In The Best Interests Of Users

Executives must play a proactive role in making sure their digital design functions in the best interests of users. Doing so has the potential to give companies a deeper and more positive relationship with their customers. Brands that design their sites to exploit consumers’ behavioral biases (or that fail to recognize they are doing so unknowingly) might benefit in the short term, but the damage done to their reputation will do lasting harm. To determine how your brand might be exploiting customers online, ask yourself the following questions: 1) Are you being transparent in user agreements? 2) Do you make cancelling your services easy? 3) Are your default options the best options for customers? 4) Do you frame choices in a misleading way? And 5) Is your product addictive?

When tech companies exploit customers, they risk undermining trust in the entire digital ecosystem. One common form of exploitation occurs when digital platforms are designed to take advantage of customers’ behavioral biases. Managers need to understand these biases and make a good faith effort to earn customer trust with ethical platform design choices.

Policymakers in Europe and the UK have studied how the design and architecture of online interactions affect consumers and markets, and they have already issued regulations to protect consumers. More recently, the U.S. FTC has begun to study the issue as well, but currently there are few constraints on firm behavior. When consumers are likely to be harmed, well-designed regulation can protect consumers as well as the firms that abide by the rules.

But companies must also play a proactive role in this process. There are concrete steps managers can take to support their consumers and to deliver more trustworthy and customer-friendly technology. To see how customer exploitation has come to be so prevalent, and how to avoid engaging in it, it’s helpful to understand the economics literature that has investigated one form of customer exploitation: the systematic exploitation of behavioral biases.

Twentieth-century economic analyses often assumed that people were unrealistically sophisticated in their day-to-day decisions – using all available information, unaffected by the way an issue is “framed,” and virtually never making mistakes. Over the past several decades, a growing body of research has explored the ways in which decision making systematically differs from this assumption. This work has documented a variety of systematic behavioral biases, such as the tendency to notice only the most salient information, to be influenced by small changes to the framing of choices, to stick with whatever default a company sets even if there are better options, and to over-discount the future in favor of immediate benefit.

Companies face an ethical and strategic choice when serving consumers with behavioral biases. They can try to debias their customers. They can work around – or accommodate – customer biases, while trying to act in the best interest of customers. Or, they can exploit behavioral biases for short-term profit. At the risk of stating the obvious, companies should work to avoid exploiting customers’ biases. Here, we provide an overview of several biases that are commonly present in online contexts.

Bias 1: Customer Inattention

Consumers cannot give their full attention and cognition to every piece of information about every product and service they buy. They therefore react most strongly to the information that is most salient. Importantly, this salience may be chosen and created by the firm itself. Research from Mike in collaboration with Ginger Zhe Jin and Daniel Martin suggests that customers do not fully account for the strategic component of such disclosure decisions – leading to costly mistakes and consumer harm. Platforms often manipulate consumer attention, from the order in which search results are shown to the extent to which paid content is placed above organic content, among other techniques.

For example, StubHub shrouds fees on tickets, making it hard for customers to compute and compare the total costs they should expect to pay (which we examine in more detail later). Google uses much of its main search page to draw attention to paid ads and its own content, crowding out the organic search results that are potentially most useful to customers.

Bias 2: Reference Dependence And Framing Effects

How product choices are framed, and which reference points are provided, can also affect how consumers make selections. Consider an example from Dan Ariely’s 2008 book Predictably Irrational: A magazine offers three subscriptions – digital content for $59, a print subscription for $125, and both print and digital content for $125. Why would anyone buy only the print subscription when they can get print and digital for the same price? They wouldn’t, and that’s precisely the point. The print-only option is likely present only to make the $125 print-and-digital option seem like a better deal – a phenomenon known as the decoy effect.

Decoy pricing is only one form of framing that can be exploitative. Consider the dire warnings often presented to customers to get them to buy purchase insurance: “Buy now before it’s too late.” “If you fail to protect your device, you will be responsible for all damage.” “Seventy percent of people have protected their device. Will you?” These messages are often designed to leverage insights from behavioral economics to nudge customers to make a purchase, at times with little regard for whether the purchase is ultimately good for the customer.

Bias 3: Default Bias

People tend to select the default option, even if better options are available, and even if the choice will have a major effect on their lives. As mentioned above, this can in principle be used in ways that will help people. For example, economists James Choi, David Laibson, Brigitte Madrian, and Andrew Metrick set out to understand the impact of defaults on 401(k) enrollment, and found that changes as simple as setting the default to “enroll” versus “not enroll” significantly increased the chance of enrollment. (They found similar results when they changed the default contribution rate.) But there are plenty of examples where autoenrollment and strategic defaults can be set in ways that leave users worse off, so these decisions need to be made carefully.

Bias 4: Addiction And Present Bias

Recent research estimates that nearly a third of social media consumption is driven by addiction. More broadly, digital addiction may lead people to stay online longer than they planned or wish to. This can be exacerbated by business models and the ways in which companies use algorithms and analytics. Tech companies regularly measure user engagement as an outcome and as a success metric. This can lead to outcomes where engagement increases but where customer wellbeing may suffer. As a user’s outside options become more pressing (sleep, homework, loading the dishwasher) a platform might even use stronger measures to keep the user engaged.

Five Questions Your Brand Must Consider

By more carefully examining the risk of harm and the behavioral foundations of decision making, managers can deploy algorithms and experimentation more proactively and ethically. This can help brands put guardrails in place and invest in measuring outcomes, so that they avoid unwittingly exploiting behavioral biases.

When making design choices on a platform, managers should step back from short-term and narrow metrics like conversions and think through the broader questions about the value they create for their stakeholders. Here, we examine these questions and provide specific and actionable recommendations for managers.

1. Are You Transparent About Prices And Fees?

In 2015, StubHub set out to brand itself as a transparent ticket seller. Marketing materials promised: “No surprise fees at checkout.” When searching for tickets, users would see prices, including all fees, up front, making it easier to compare tickets and know exactly what they would pay. However, the transparent approach turned out to be short-lived.

StubHub ran an experiment in which they compared their transparent system to a system in which they showed the base ticket price on the main search page, and the full price only at checkout. StubHub users who weren’t shown fees until checkout spent about 21% more on tickets and were 14% more likely to complete a purchase compared with those who saw the total cost of a ticket upfront. The company dropped its transparent approach and went back to shrouded fees.
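When running an experiment like this, managers should verify that an observed lift is larger than chance before acting on it. Here is a minimal sketch of a two-proportion z-test in Python; the visitor and purchase counts are hypothetical illustrations, not StubHub’s actual data (only the 14% conversion lift mirrors the figure above):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between arms A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: arm A shows all-in prices, arm B shrouds fees until checkout
p_a, p_b, z, p = two_proportion_z(conv_a=1000, n_a=20000, conv_b=1140, n_b=20000)
print(f"A={p_a:.2%}  B={p_b:.2%}  lift={(p_b / p_a - 1):.0%}  z={z:.2f}  p={p:.4f}")
```

A statistically significant lift, of course, says nothing about whether the treatment is good for customers – which is precisely the point of the questions in this section.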

While this may have been profitable in the short run, it comes with risks: customers may eventually tire of the lack of transparency and purchase elsewhere, and new regulation may be imposed to limit how much a company can shroud its prices.

2. Do You Make It Easy To Cancel Your Service?

Signing up for services online is typically easy, but cancelling is often quite difficult. Firms create barriers and hassle by requiring consumers to take many steps to end the relationship. They use framing and persuasion to try to prevent cancellation: “Are you sure you want to cancel?” “You could pause instead.” “Remember all the benefits of membership!”

Adding and subtracting services would likely be more symmetric in a more consumer-friendly market. For example, if the service can be bought online, a standard cancellation should not require the consumer to mail in a paper letter. If the service can be initiated with one click, it should generally be just as easy to cancel.
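One way to honor this symmetry is to make the cancellation path literally mirror the signup path: one call in, one call out. A minimal sketch, using a hypothetical SubscriptionService class (all names are illustrative):

```python
class SubscriptionService:
    """Toy subscription store where cancelling mirrors subscribing."""

    def __init__(self):
        self.active = set()  # user ids with an active subscription

    def subscribe(self, user_id: str) -> None:
        self.active.add(user_id)  # one step to start the relationship

    def cancel(self, user_id: str) -> None:
        # One step to end it: no retention funnel, no mailed letter
        self.active.discard(user_id)

svc = SubscriptionService()
svc.subscribe("user-42")
svc.cancel("user-42")
print(len(svc.active))  # 0: cancelling took exactly as many steps as signing up
```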

3. Do You Use Default Settings In A Way That Is Genuinely Helpful For Customers?

When you decide where to pre-set a default option for customers, ask yourself the following questions: Am I helping users make good decisions through this setting? Are my consumers actively choosing everything they pay for? Are defaults set so that no consumer must “opt out” to avoid purchasing?

Companies should research customer preferences to implement ethical defaults. If research shows that most people choose the second-highest price tier for a set of products or services, this might be a reasonable default choice. In general, the default should not be set to cause the consumer to purchase something that is optional and that she did not actively choose. That “opt-out” design captures consumers who are not paying attention or who make an error. Purchase expenditures should generally be “opt-in” for consumers.
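The opt-in principle can be enforced directly in checkout logic. A minimal sketch, assuming hypothetical Cart and AddOn types (names and prices are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AddOn:
    name: str
    price_cents: int
    selected: bool = False  # opt-in: optional purchases are never pre-selected

@dataclass
class Cart:
    base_price_cents: int
    add_ons: list[AddOn] = field(default_factory=list)

    def total_cents(self) -> int:
        # Only add-ons the customer actively chose contribute to the total
        return self.base_price_cents + sum(
            a.price_cents for a in self.add_ons if a.selected
        )

cart = Cart(base_price_cents=5900, add_ons=[AddOn("device protection", 1299)])
print(cart.total_cents())  # 5900: nothing the customer did not choose

cart.add_ons[0].selected = True  # explicit opt-in by the customer
print(cart.total_cents())  # 7199
```

Because every optional item starts unselected, a distracted customer’s total can only include charges they explicitly chose.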

4. Do You Frame Choices In A Misleading Way?

To maintain long-term customer satisfaction, options should be presented in a way that allows customers to make decisions in their own best interest. When framing a decision, ask yourself: What decisions would the company’s customers make if they were fully informed and had plenty of time to think through the decision?

Customers don’t always buy a cheaper product if they spend more time considering their options. For example: When the Massachusetts Health Connector switched from showing all plan choices on one screen to requiring consumers to first choose a generosity level and then choose a branded plan, more consumers chose more generous (and expensive) plans, according to research by economists Keith Ericson and Amanda Starc published in the Journal of Health Economics.

5. Do You Create Content That Is Addictive?

Managers should consider the habits they lead customers to form. Research has found, for instance, that teenagers who spend more time online and less time with friends tend to be less happy – suggesting that elements of online activity may be addictive. Among other corners of the tech sector, this is relevant in social media: A new field experiment by economists Hunt Allcott, Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow seeks to understand the extent to which social media is addictive, experimentally varying the amount of time that people spend on social media by paying some people to stay offline. The team estimates that nearly a third of time spent on social media is a result of addiction. Digital addiction has important implications for managers, for consumer protection, and even for antitrust enforcement – as shown by Fiona’s research with psychiatrist James Niels Rosenquist and law professor Samuel Weinstein.

Spending less time online can lead to increased happiness. But an ad-supported revenue model, such as YouTube’s, changes the incentives of managers at every level toward creating addictive products that foster constant, uninterrupted engagement. YouTube’s decision to automatically start a new video immediately after the previous one ends is an example of a brand prioritizing engagement in a way that may sacrifice the well-being of customers. YouTube’s own support pages state: “If you’re on YouTube on your computer, Autoplay is switched on by default.” (They say the same is true for the mobile web and the YouTube app.) In other words, YouTube requires you to opt out if you would prefer to watch one video at a time.

Companies should also think about how their business model impacts their incentives for creating an addictive product. Subscription models might create an incentive for the service to provide long-term value and engagement, with less incentive to keep users on the platform all day, every day.

Regulation Will Level The Digital Playing Field

We believe in the potential of the internet. But we also believe that the internet needs improved regulation. Like any powerful tool, it serves society best when it has appropriate safeguards. There is a reason we have created traffic lights and crash test standards and mandated drivers’ licenses, seatbelts, and airbags in automobiles. Automobiles are a powerful and useful form of transportation that delivers a much higher net benefit to society when regulated in these ways. Consumer protection plays the same role offline, in areas like finance and food. However, it is vastly under-deployed online – an issue we discuss in a recent Tobin Center policy paper as part of an ongoing digital regulation project.

We believe digital consumer protection is necessary, and regulation will certainly play a major role in making digital commerce more trustworthy. But regulation isn’t enough. Executives must play a proactive role in making sure their digital design functions in the best interests of customers. Doing so has the potential to give the company a deeper and more positive relationship with its customers, and to avoid the reputational risks associated with customer exploitation. And, importantly, when regulation arrives, the company will already be well-positioned to abide by the new standards and to help create a more trustworthy internet.

originally posted on hbr.org by Michael Luca and Fiona Scott Morton

About Authors:
Michael Luca is the Lee J. Styslinger III Associate Professor of Business Administration at Harvard Business School and a coauthor (with Max H. Bazerman) of The Power of Experiments: Decision Making in a Data-Driven World (forthcoming from MIT Press).

Fiona Scott Morton is the Theodore Nierenberg Professor of Economics at the Yale University School of Management. Her research area is the economics of competition with a focus on healthcare markets and antitrust enforcement. She served as Deputy Assistant Attorney General for Economic Analysis at the Antitrust Division of the Department of Justice under President Obama.