Is Your Marketing AI Failing You? Maybe You Are Not Asking The Right Questions

Fewer than 40% of companies that invest in AI see gains from it, usually because of one or more of these errors: (1) They don’t ask the right question, and end up directing AI to solve the wrong problem. (2) They don’t recognize the differences between the value of being right and the costs of being wrong, and assume all prediction mistakes are equivalent. (3) They don’t leverage AI’s ability to make far more frequent and granular decisions, and keep following their old practices. If marketers and data science teams communicate better and take steps to avoid these pitfalls, they’ll get much higher returns on their AI efforts.

When a large telecom company’s marketers set out to reduce customer churn, they decided to use artificial intelligence to determine which customers were most likely to defect. Armed with the AI’s predictions, they bombarded the at-risk customers with promotions enticing them to stay. Yet many left despite the retention campaign. Why? The managers had made a fundamental error: They had asked the algorithm the wrong question. While the AI’s predictions were good, they didn’t address the real problem the managers were trying to solve.

That kind of scenario is all too common among companies using AI to inform business decisions. In a 2019 survey of 2,500 executives conducted by MIT Sloan Management Review and the Boston Consulting Group, 90% of respondents said that their companies had invested in AI, but fewer than 40% of them had seen business gains from it in the previous three years.

In our academic, consulting, and nonexecutive director roles, we have studied and advised more than 50 companies, examining the main challenges they face as they seek to leverage AI in their marketing. This work has allowed us to identify and categorize the errors marketers most frequently make with AI and develop a framework for preventing them.

Let’s look at the errors first.

Alignment: Failure To Ask The Right Question

The real concern of the managers at our telecom firm should not have been identifying potential defectors; it should have been figuring out how to use marketing dollars to reduce churn. Rather than asking the AI who was most likely to leave, they should have asked who could best be persuaded to stay – in other words, which customers considering jumping ship would be most likely to respond to a promotion. Just as politicians direct their efforts at swing voters, managers should target actions toward swing customers. By giving the AI the wrong objective, the telecom marketers squandered their money on swaths of customers who were going to defect anyway and underinvested in customers they should have doubled down on.
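
To make the contrast concrete, here is a minimal sketch, in Python, of the two targeting rules. The customers, churn probabilities, and budget are invented for illustration; they are not the telecom firm’s data, and a real system would estimate the with-offer and without-offer churn probabilities from a holdout experiment or an uplift model.

```python
# Illustrative only: customers, probabilities, and budget are invented.
customers = [
    # (id, P(churn | no offer), P(churn | offer))
    ("A", 0.90, 0.88),  # leaving either way -> not persuadable
    ("B", 0.60, 0.25),  # high risk AND responsive -> swing customer
    ("C", 0.55, 0.50),  # mildly persuadable
    ("D", 0.10, 0.09),  # staying anyway -> offer would be wasted
]

BUDGET = 2  # number of offers we can afford to send

# Old rule: target whoever is most likely to leave.
by_risk = sorted(customers, key=lambda c: c[1], reverse=True)[:BUDGET]

# Better rule: target whoever the offer changes the most (the uplift).
by_uplift = sorted(customers, key=lambda c: c[1] - c[2], reverse=True)[:BUDGET]

print("Target by churn risk:", [c[0] for c in by_risk])     # ['A', 'B']
print("Target by uplift:    ", [c[0] for c in by_uplift])   # ['B', 'C']
```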

In a similar case, marketing managers at a gaming company wanted to encourage users to spend more money while they were playing its game. The marketers asked the data science team to figure out what new features would most increase users’ engagement. The team used algorithms to tease out the relationship between possible features and the amount of time customers spent playing, ultimately predicting that offering prizes and making the public ranking of users’ positions more prominent would keep people in the game longer. The company made adjustments accordingly, but new revenues didn’t follow. Why not? Because managers, again, had asked the AI the wrong question: how to increase players’ engagement rather than how to increase their in-game spending. Because most users didn’t spend money inside the game, the strategy fell flat.

At both companies, marketing managers failed to think carefully about the business problem being addressed and the prediction needed to inform the best decision. AI would have been extremely valuable if it had predicted which telecom customers would be most persuadable and which game features would increase players’ spending.

Asymmetry: Failure To Recognize The Difference Between The Value Of Being Right And The Costs Of Being Wrong

AI’s predictions should be as accurate as possible, shouldn’t they? Not necessarily. A bad forecast can be extremely expensive in some cases but less so in others; likewise, superprecise forecasts create more value in some situations than in others. Marketers – and, even more critically, the data science teams they rely on – often overlook this.

Consider the consumer goods company whose data scientists proudly announced that they’d increased the accuracy of a new sales-volume forecasting system, reducing the error rate from 25% to 17%. Unfortunately, in improving the system’s overall accuracy, they made it more accurate for low-margin products while making it less accurate for high-margin ones. Because the cost of underestimating demand for the high-margin offerings substantially outweighed the value of correctly forecasting demand for the low-margin ones, profits fell when the company implemented the new, “more accurate” system.
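
A small sketch, with invented products, margins, and forecasts, shows how this can happen: the “new” model wins on plain percentage error but loses once each unit of error is weighted by the margin at stake.

```python
# Illustrative only: products, margins, demand, and forecasts are invented.
products = [
    # (name, margin per unit, actual demand, old forecast, new forecast)
    ("low-margin staple", 1.0, 1000, 700, 990),   # new model much better here
    ("high-margin hero",  8.0,  200, 190, 150),   # new model much worse here
]

def mean_abs_pct_error(rows, col):
    return sum(abs(r[2] - r[col]) / r[2] for r in rows) / len(rows)

def margin_weighted_error(rows, col):
    # Weight each unit of forecast error by the profit at stake.
    return sum(abs(r[2] - r[col]) * r[1] for r in rows)

for label, col in (("old", 3), ("new", 4)):
    print(label,
          f"MAPE={mean_abs_pct_error(products, col):.1%}",
          f"margin-weighted error={margin_weighted_error(products, col):,.0f}")
# old: MAPE=17.5%, margin-weighted error=380
# new: MAPE=13.0%, margin-weighted error=410  <- "more accurate" but less profitable
```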

It’s important to recognize that AI’s predictions can be wrong in different ways. In addition to over- or underestimating results, they can give false positives (for instance, identifying customers who actually stay as probable defectors) or false negatives (identifying customers who subsequently leave as unlikely defectors). The marketer’s job is to analyze the relative cost of these types of errors, which can be very different. But this issue is often ignored by, or not even communicated to, the data science teams that build prediction models, who then assume all errors are equally important, leading to expensive mistakes.

Aggregation: Failure To Leverage Granular Predictions

Firms generate torrents of customer and operational data, which standard AI tools can use to make detailed, high-frequency predictions. But many marketers don’t exploit that capability and keep operating according to their old decision-making models. Take the hotel chain whose managers meet weekly to adjust prices at the location level despite having AI that can update demand forecasts for different room types on an hourly basis. Their decision-making process remains a relic of an antiquated booking system.

Another major impediment is managers’ failure to get the granularity and frequency of their decisions right. In addition to reviewing the pace of their decision-making, they should ask whether decisions based on aggregate-level predictions should draw on more finely tuned predictions. Consider a marketing team deciding how to allocate its ad dollars on keyword searches on Google and Amazon. The data science team’s current AI can predict the lifetime value of customers acquired through those channels. However, the marketers might get a higher return on ad dollars by using more-granular predictions about customer lifetime value per keyword per channel.
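
A hypothetical sketch of the difference, with invented lifetime-value and acquisition-cost figures: channel-level averages point the budget one way, while ranking individual keyword-channel cells points it another.

```python
# Illustrative only: lifetime values (CLV) and acquisition costs (CAC) are invented.
cells = [
    # (channel, keyword, predicted CLV, cost per acquisition)
    ("google", "running shoes",  220, 60),
    ("google", "cheap sneakers",  90, 55),
    ("amazon", "running shoes",  180, 50),
    ("amazon", "cheap sneakers", 150, 45),
]

# Channel-level view: averages hide the spread across keywords.
for channel in ("google", "amazon"):
    rows = [c for c in cells if c[0] == channel]
    avg_ratio = sum(clv / cac for _, _, clv, cac in rows) / len(rows)
    print(f"{channel}: average CLV/CAC = {avg_ratio:.1f}")   # google 2.7, amazon 3.5

# Keyword-level view: fund the best individual cells first.
ranked = sorted(cells, key=lambda c: c[2] / c[3], reverse=True)
print("Fund in this order:", [(ch, kw) for ch, kw, _, _ in ranked])
# The single best cell is a google keyword, even though amazon looks better on average.
```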

Communication Breakdowns

In addition to constantly guarding against the types of errors we’ve described, marketing managers have to do a better job of communicating and collaborating with their data science teams – and being clear about the business problems they’re seeking to solve. That isn’t rocket science, but we often see marketing managers fall short on it.

Several things get in the way of productive collaboration. Some managers plunge into AI initiatives without fully understanding the technology’s capabilities and limitations. They may have unrealistic expectations and so pursue projects AI can’t deliver on, or they underestimate how much value AI could provide, so their projects lack ambition. Either situation can happen when senior managers are reluctant to reveal their lack of understanding of AI technologies.

Data science teams are also complicit in the communication breakdown. Often, data scientists gravitate toward projects with familiar prediction requirements, whether or not they are what marketing needs. Without guidance from marketers about how to provide value, data teams will often remain in their comfort zone. And while marketing managers may be reluctant to ask questions (and reveal their ignorance), data scientists often struggle to explain to nontechnical managers what they can and can’t do.

We’ve developed a three-part framework that will help open lines of communication between the marketing and data science teams. The framework, which we’ve applied at several companies, lets teams combine their respective expertise and create a feedback loop between AI predictions and the business decisions they’re meant to inform.

The Framework In Practice

To bring the framework to life, let’s return to the telecom company.

What Is The Marketing Problem We Are Trying To Solve?

The answer to this question has to be meaningful and precise. For example, “How do we reduce churn?” is far too broad to be of any help to the developers of an AI system. “How can we best allocate our budget for retention promotions to reduce churn?” is better but still too vague. (Has the retention budget been set, or is that something we need to decide? What do we mean by “allocate”? Are we allocating across different retention campaigns?) Finally, we get to a clearer statement of the problem, such as: “Given a budget of $x million, which customers should we target with a retention campaign?” (Yes, this question could be refined even further, but you get the point.) Note that “How do we predict churn?” doesn’t appear anywhere – churn prediction is not the marketing problem.

When defining the problem, managers should get down to what we call the atomic level – the most granular level at which it’s possible to make a decision or undertake an intervention. In this case the decision is whether or not to send each customer a retention promotion.

As part of the discovery process, it’s instructive to document exactly how decisions are made today. For example, the telecom company uses AI to rank customers (in descending order) by their risk of churning in the next month. It targets customers by starting at the top of that ranking and working down it until the budget allocated to the retention campaign runs out. While this step seems merely descriptive and doesn’t reveal how the problem might be reframed, we have seen many cases where it is the first time the data science team actually gets to understand how its predictions are used.
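
Written out explicitly (with invented churn scores, offer cost, and budget), that current rule is just a greedy walk down the risk ranking:

```python
# Illustrative only: churn scores, offer cost, and budget are invented.
# This simply documents the decision rule described above.
churn_scores = {"A": 0.92, "B": 0.81, "C": 0.77, "D": 0.40, "E": 0.15}
OFFER_COST = 50.0
RETENTION_BUDGET = 120.0

targeted, spent = [], 0.0
# Walk down the risk ranking until the retention budget runs out.
for customer in sorted(churn_scores, key=churn_scores.get, reverse=True):
    if spent + OFFER_COST > RETENTION_BUDGET:
        break
    targeted.append(customer)
    spent += OFFER_COST

print(targeted)  # ['A', 'B'] -- the budget covers only the two riskiest customers
```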

It’s important at this stage for the marketing team to be open to iterating to get to a well-defined problem, one that captures the full impact of the decision on the P&L, recognizes any trade-offs, and spells out what a meaningful improvement might look like. In our experience, senior executives usually have a good sense of the problem at hand but have not always precisely defined it or clearly articulated to the rest of the team how AI will help solve it.

Is There Any Waste Or Missed Opportunity In Our Current Approach?

Marketers often recognize that their campaigns are disappointments, but they fail to dig deeper. At other times managers are unsure about whether the results can be improved. They need to step back and identify the waste and missed opportunities in the way a decision is currently made.

For instance, most airlines and hotels track measures of spill and spoil: Spoil measures empty seats or rooms (often the result of pricing too high); spill measures “lost trading days” on which flights or hotels filled too quickly (the result of pricing too low). Spill and spoil are beautiful measures of missed opportunity because they tell a very different story from aggregated measures of occupancy and average spend. To make the most of their AI investments, marketing leaders need to identify their spill and spoil equivalents – not in the aggregate but at the atomic level.
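
As a rough sketch of what an atomic-level equivalent might look like for a hotel, here is spill and spoil computed per room type per night from invented booking records:

```python
# Illustrative only: capacities, sales, and sell-out times are invented.
# Spill and spoil are measured at the atomic level (room type x night),
# not as an aggregate occupancy figure.
capacity = {("standard", "2024-03-01"): 100, ("suite", "2024-03-01"): 10}
rooms_sold = {("standard", "2024-03-01"): 72, ("suite", "2024-03-01"): 10}
hours_sold_out_before_arrival = {("standard", "2024-03-01"): 0,
                                 ("suite", "2024-03-01"): 36}

for cell, cap in capacity.items():
    spoil = cap - rooms_sold[cell]                   # empty rooms: priced too high?
    spill = hours_sold_out_before_arrival[cell] > 0  # filled too early: priced too low?
    print(cell, f"spoil={spoil} rooms", f"spill={'yes' if spill else 'no'}")
```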

The first step is to reflect on what constitutes success and failure. At the telecom firm, the knee-jerk definition of success was “Did the targeted customers renew their contracts?” But that’s too simplistic and inaccurate; such customers might have renewed without receiving any promotion, which would make the promotion a waste of retention dollars. Similarly, is it a success when a customer who was not targeted by a promotion does defect? Not necessarily. If that customer was going to leave anyway, not targeting her was indeed a success, because she wasn’t persuadable. However, if the customer would have stayed if she’d received the promotion, an opportunity was missed. So what would constitute success at the atomic level? Targeting only customers with high churn risk who were persuadable and not targeting those who were not.

Once the sources of waste and missed opportunities are identified, the next step is to quantify them with the help of data. This can be easy or very hard. If the data team can quickly determine what was a success or failure at the atomic level by looking at the data, great! The team can then look at the distribution of success versus failure to quantify waste and missed opportunities.

There are cases, however, where it is difficult to identify failures at the atomic level. At the telecom firm, the data team wasn’t examining which customers were persuadable, and that made it hard to classify failures. In such circumstances teams can quantify waste and missed opportunities using more-aggregated data, even if the results are less precise. One approach for the telecom firm would be to look at the cost of the promotion incentive relative to the incremental lifetime value of the customers who received it. Similarly, for the customers not contacted by the promotion, the team might look at the lost profit associated with the nonrenewal of their contracts.

Such tactics helped the telecom company identify customers who were being retained but at a cost greater than their incremental future value, high-value customers who had defected despite receiving retention promotions, and high-value customers who had not been targeted and left after the campaign. This quantification was possible because the data science team had a control group of customers – who had been left alone to set the baseline – to compare results against.
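
A simplified sketch of that quantification, using invented customer values, renewal outcomes, and promotion cost: waste is retention spend on customers worth less than the offer, and missed opportunity is the value of untargeted customers who left.

```python
# Illustrative only: customers, values, renewal outcomes, and promotion cost are
# invented. "Incremental lifetime value" is the extra future profit from keeping
# the customer, estimated against the untargeted control group.
PROMO_COST = 40.0

targeted = [
    # (customer, renewed?, incremental lifetime value if retained)
    ("A", True,   30.0),   # retained, but worth less than the offer -> waste
    ("B", True,  400.0),   # retained and valuable -> success
    ("C", False, 350.0),   # valuable but defected despite the offer -> failure
]
control = [
    # (customer, renewed?, lifetime value)
    ("D", False, 500.0),   # valuable, never targeted, left -> missed opportunity
    ("E", True,  200.0),
]

waste = sum(PROMO_COST - v for _, renewed, v in targeted if renewed and v < PROMO_COST)
missed = sum(v for _, renewed, v in control if not renewed)
print(f"waste ~ {waste:.0f}, missed opportunity ~ {missed:.0f}")  # waste ~ 10, missed ~ 500
```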

What Is Causing The Waste And Missed Opportunities?

This question is usually the hardest, because it requires reexamining implicit assumptions about the firm’s current approach. To find the answer the firm must explore its data and get its subject matter experts and data scientists to collaborate. The focus should be on solving the alignment, asymmetry, and aggregation problems we identified earlier.

Addressing Alignment. The goal here is to map the connections between AI predictions, decisions, and business outcomes. That requires thinking about hypothetical scenarios. We recommend that teams answer the following questions:

  1. In an ideal world, what knowledge would you have that would fully eliminate waste and missed opportunities?

  2. Is your current prediction a good proxy for that?

If the telecom team members had answered the first question, they would have realized that if their AI predicted perfectly who could be won over by the retention offer (rather than who was about to leave), they could eliminate both waste (because they wouldn’t bother making offers to unpersuadable customers) and missed opportunities (because they’d reach every customer who was persuadable). While it is impossible to make perfect predictions in the real world, focusing on persuadability would still have led to great improvements.

After the ideal information is identified, the question becomes whether the data science team can make the required predictions with sufficient accuracy. It’s crucial that the marketing and data science teams answer this together; marketers often don’t know what can be done. Similarly, it is difficult for the data scientists to link their predictions to decisions if they don’t have subject matter expertise.

  3. Does the output of your AI fully align with the business objective?

Remember the gaming company that used AI to identify features that would increase user engagement? Imagine the gains if the company had created AI that predicted user profitability instead.

A common mistake here is believing that a correlation between the prediction and the business objective is enough. This thinking is flawed because correlation is not causation, so you might predict changes in something that correlates with profitability but does not in fact improve it. And even when there is causation, the predicted quantity may not map fully onto the objective, so acting on it may fall short of the outcome you want, leading to missed opportunities.

At the telecom company, asking this third question might lead the team to think not only about persuadable users but also about the increase or decrease in their profitability. A persuadable user with low expected profitability should have a lower priority than a persuadable user with high expected profitability.

Addressing Asymmetry. Once you have a clear map that links the AI prediction with the decision and the business outcome, you need to quantify the potential costs of errors in the system. That entails asking, How much are we deviating from the business results we want, given that the AI’s output isn’t completely accurate?

At the telecom company, the cost of sending a retention promotion to a nonpersuadable customer (waste) is lower than the cost of losing a high-value customer who could have been persuaded by the offer (missed opportunity). Therefore, the company will be more profitable if its AI system focuses on not missing persuadable customers, even if that increases the risk of falsely identifying some customers as being receptive to the retention offer.

The difference between waste and missed opportunity sometimes is difficult to quantify. Nevertheless, even an approximation of the asymmetric cost is worth calculating. Otherwise, decisions may be made based on AI predictions that are accurate on some measures but inaccurate on outcomes with a disproportionate impact on the business objective.
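
Even a rough asymmetry can change the decision rule. One hedged sketch, with invented costs and persuadability scores, picks the targeting threshold that minimizes expected cost rather than the one that maximizes accuracy:

```python
# Illustrative only: costs, scores, and labels are invented. When a missed
# persuadable customer costs far more than a wasted offer, the cost-minimizing
# threshold sits lower than an accuracy-maximizing one would.
COST_WASTE = 40.0    # offer sent to a customer who was not persuadable
COST_MISSED = 300.0  # persuadable, high-value customer left untargeted

# (predicted persuadability score, actually persuadable?)
scored = [(0.9, True), (0.7, True), (0.6, False), (0.4, True), (0.2, False)]

def expected_cost(threshold):
    cost = 0.0
    for score, persuadable in scored:
        if score >= threshold and not persuadable:
            cost += COST_WASTE      # false positive: wasted promotion
        elif score < threshold and persuadable:
            cost += COST_MISSED     # false negative: missed opportunity
    return cost

best_cost, best_threshold = min((expected_cost(t), t) for t in (0.3, 0.5, 0.7))
print(f"best threshold {best_threshold} with expected cost {best_cost:.0f}")  # 0.3, 40
```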

Addressing Aggregation. Most marketing AI doesn’t make new decisions; it addresses old ones such as segmentation, targeting, and budget allocation. What’s new is that decisions are based on richer amounts of information that are collected and processed by the AI. The risk here is that humans are, by and large, reluctant to change. Many managers haven’t yet adjusted to the frequency and level of detail at which the new technology can make old decisions. But why should they keep making those decisions at the same pace? With the exact same constraints? As we saw earlier, this sometimes results in failure.

The way to solve this problem is by conducting two analyses. In the first, the team should examine how it could eliminate waste and missed opportunities through other marketing actions that might result from the predictions generated. The intervention that the team at the telecom firm considered was a retention discount. What if the team incorporated other incentives in the decision? Could it predict who would be receptive to those incentives? Could it use AI to tell which incentive would work best with each type of customer?

The second type of analysis should quantify the potential gains of making AI predictions more frequently or more granular or both. At one retailer, for instance, the data science team had developed AI that could make daily predictions of responses to marketing actions at the individual-customer level, yet the chain’s marketing team was making decisions on a weekly basis across 16 customer segments. While changing the way the decisions were made would obviously incur costs, would the retailer find that the benefits outweighed them?
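
The comparison itself can be a back-of-the-envelope calculation once backtest estimates exist; the figures below are placeholders, not the retailer’s numbers:

```python
# Illustrative only: all figures are invented placeholders. In practice the two
# profit estimates would come from backtesting the AI's daily, customer-level
# predictions against the current weekly, 16-segment decision rule.
profit_weekly_segments = 1_000_000    # estimated annual profit, current regime
profit_daily_individual = 1_180_000   # estimated annual profit, granular regime
cost_of_change = 90_000               # tooling, process, and review overhead

net_gain = profit_daily_individual - profit_weekly_segments - cost_of_change
print(f"Net annual gain from finer-grained decisions: {net_gain:,}")  # 90,000
```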

Marketing needs AI. But AI needs marketing thinking to realize its full potential. This requires the marketing and data science teams to have a constant dialogue so that they can understand how to move from a theoretical solution to something that can be implemented.

The framework we’ve presented here has proven to be useful for getting the two groups to work together and boost the payoffs from AI investments. The approach we’ve described should create opportunities to better align AI predictions with desired enterprise outcomes, recognize the asymmetric costs of poor predictions, and change the decisions’ scope by allowing the team to rethink the frequency and granularity of actions.

As marketers and data scientists use this framework, they must establish an environment that allows a transparent review of performance and regular iterations on approach – always recognizing that the objective is not perfection but ongoing improvement.

Originally posted on hbr.org by Eva Ascarza, Michael Ross, and Bruce G.S. Hardie

About the Authors:
Eva Ascarza is the Jakurski Family Associate Professor of Business Administration at Harvard Business School.

Michael Ross is a cofounder of DynamicAction, which provides cloud-based data analytics to retail companies, and an executive fellow at London Business School.

Bruce G.S. Hardie is a professor of marketing at London Business School.