Can AI Navigate the Health Care Industry? Six Critical Factors and the Required Business Models

AI has the potential to significantly improve the quality and reduce the cost of health care. But as companies design new offerings, they must take into account the obstacles they will encounter in persuading customers, regulators, and payers to accept them. This article identifies those obstacles and shows how specific kinds of business models can overcome them.

The adoption of AI in health care is being driven by the exponential growth of health data, the broad availability of computational power, and foundational advances in machine learning techniques. AI has already demonstrated the potential to create value by reducing costs, expanding access, and improving quality. But for AI to realize its transformative potential at scale, its proponents need business models designed to capture that value.

AI changes the rules of business and, as ever, there are some unique considerations in health care. In order to understand these, we studied AI across 15 sets of use cases. These span five domains of health care (patient engagement, care delivery, population health, R&D, and administration) and cover three types of functions (measure, decide, and execute). Drawing on our experience developing strategy for health care and life sciences firms and their technology vendors (Nikhil), and building an AI-based service for health insurers (Trishan), we identified six critical factors and the required business model adaptations that firms (both AI vendors and users) need to succeed in health care.

Address Customers’ Aversion To Risk

Failure in health care is costly. Users of AI solutions in health care are, therefore, more risk-averse than their counterparts in other sectors. They require more evidence before rolling out AI applications. This places burdens on product development, lengthens sales cycles, and slows adoption rates. Firms can sidestep these issues by deploying business models that share in the downside risk of their AI solutions.

In biopharma R&D, for example, the failure of drugs in clinical trials is expensive and drives up the average cost of developing new medicines. So companies are naturally wary of new approaches. Exscientia, the pharmatech company behind the first two AI-designed molecules submitted for human trials, addresses this by entering co-development arrangements with its pharma customers that tie the amount it is paid to how successful its molecules turn out to be down the road. Under this business model Exscientia takes on a significant portion of the risk, which makes the model closer to those used by traditional drug discovery firms than to technology business models such as Software-as-a-Service (SaaS). While Exscientia’s approach requires more initial capital than fee-based approaches, it allows the company to capture more of the gains when a drug succeeds.

Health systems and payers are also wary of the flood of pitches they receive from AI vendors and are reluctant to commit to them at full scale. Instead, they will often start pilot projects with these vendors, which creates a dilemma: The success of AI depends on analyzing data at scale, but pilots, by definition, are sub-scale. To accelerate adoption, AI vendors need to address this risk aversion through their business model. At a minimum, they need to be willing to put their fees at risk to show they have some skin in the game, and ideally they should also be willing and able to take a financial hit if their product fails to deliver as promised. As a solution matures, at-risk pricing becomes less necessary to close a sale, but vendors with a proven track record should consider retaining it, because a willingness to share risk can justify higher prices. A minimal sketch of how such a performance-based fee might be computed follows.
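To make the structure of such an arrangement concrete, here is an illustrative sketch of a performance-based fee calculation. The function, its parameters, and the percentages are hypothetical assumptions for illustration, not terms from any vendor’s actual contract.

```python
def at_risk_fee(base_fee: float,
                achieved_savings: float,
                guaranteed_savings: float,
                at_risk_share: float = 0.3,
                upside_share: float = 0.2) -> float:
    """Hypothetical at-risk fee: part of the base fee is forfeited when the
    solution underdelivers, and a bonus accrues when it overdelivers."""
    if guaranteed_savings <= 0:
        raise ValueError("guaranteed_savings must be positive")
    performance = achieved_savings / guaranteed_savings
    if performance >= 1.0:
        # Full fee plus a share of savings beyond the guarantee.
        return base_fee + upside_share * (achieved_savings - guaranteed_savings)
    # Forfeit the at-risk portion of the fee in proportion to the shortfall.
    shortfall = 1.0 - performance
    return base_fee - at_risk_share * base_fee * shortfall

# Example: a vendor guarantees $1M in savings, with 30% of its $200K fee at risk.
print(at_risk_fee(200_000, 600_000, 1_000_000))    # underdelivers -> 176000.0
print(at_risk_fee(200_000, 1_500_000, 1_000_000))  # overdelivers -> 300000.0
```

The point of the structure is simply that the vendor’s revenue moves in the same direction as the customer’s outcome, which is what gives a risk-averse buyer confidence to move beyond a pilot.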

Piggyback On Legacy Structures Or Sidestep Them

There are many structural barriers that inhibit the adoption of new technology in health care, including a high level of regulation, significant market concentration, and vested interests in existing incentive structures. While AI could ultimately break through these barriers, many companies will benefit initially from designing their business models to fit in the current paradigm.

For instance, most care delivery in the United States today is still compensated on the basis of the volume of activity (fee for service). Entire systems of related billing codes for hospital procedures, clinic visits, diagnostics, and labs have been designed around assumptions about the resources and costs of products and services provided by humans. Rather than trying to change this system, AI diagnostics companies should take the easier path of getting payers to set up reimbursement codes similar to those used today for human radiologists.

An alternative, of course, is to go directly to consumers. This is Apple’s approach: It captures the value of its health AI offerings, such as arrhythmia and fall detection, by charging a premium price for the Apple Watch. Others, such as the mental health chatbot Woebot, market directly to consumers. We expect to see many more direct-to-consumer AI-enabled health care offerings in molecular diagnostics, remote patient monitoring, health coaching, and other areas.

Price In Or Pass On The Cost Of Obtaining And Preparing Data

Obtaining sufficient quantities of high-quality data is a major challenge in health care. That’s because such data often resides in different organizations and its quality varies.

One way to overcome this challenge is to use one side of a business model to fund the curation and preparation of data libraries. Tempus, for example, provides data integration services to academic research centers and hospitals, which gives it access to a huge, high-quality library of multi-modal data (clinical, radiology, pathology); it also offers genetic testing services that generate genomic data. The other side of its business uses AI on this data to derive insights, which it sells to providers to improve clinical care for specific patients and to life science companies for research purposes.

Other companies, such as Lumiata and Clarify Health, make curating data for their customers a core element of their value proposition. Lumiata’s offering is based on capability packages with different levels of data and modeling support, while Clarify Health’s is packaged by use case. Both models, though, work by spreading the high cost of building AI-ready datasets across many payer, provider, and life science customers.

Some AI companies that have scored early successes have focused on narrow use cases, such as radiology and pathology, where data is less siloed. Even in these applications, though, companies need to take into account that AI data costs are not one and done: There will be ongoing costs to customize algorithms for different populations and customers.

Invest In Staying Ahead Of Regulatory And Public Expectations For Ethical Behavior

The use of AI is fraught with ethical considerations and associated risks. This is true in health care as well, where use cases in patient engagement, care delivery, and population health are particularly prone to issues such as bias, failure to obtain appropriate patient consent, and violations of data privacy. AI purveyors must proactively mitigate these risks or face significant backlash from clinicians, patients, and policymakers.

Bias in society is reflected in historical health data and, when not corrected, can cause AI systems to make biased decisions about, for instance, who gets access to care management services or even life-saving organs for transplant. STAT found that of 161 products cleared by the U.S. Food and Drug Administration (FDA) from 2012 to 2020, just seven reported the racial makeup of their study populations and just 13 reported the gender split. This will change: The FDA is developing regulatory approaches to reduce bias and is proposing that firms monitor and periodically report on the real-world performance of their algorithms.

Consequently, firms need to ensure that the choices they make – the customers and partners they work with, the diversity of their data science teams, and the data they collect – all contribute to minimizing bias. Some companies are already doing so. For example, Google Health, which is working on AI that promises to revolutionize breast cancer screening by improving performance with an almost tenfold reduction in cost, is not only validating the algorithm’s performance in different clinical settings but also making large investments to ensure that it performs equitably across racial groups.

Incorporate Change Management To Counter Human Resistance

Health care is littered with examples of best practices that take years to be adopted even after being proven superior. Even AI applications that have institutional buy-in still need to get clinicians and other frontline workers to use them, and the painful rollout of electronic health records in the United States over the last decade, which made health care workers wary of new information technology, has only made this job harder. AI applications can be perceived as especially threatening because they change familiar workflows, impinge on clinicians’ autonomy, and can be seen as a threat to jobs or income.

Consequently, in addition to investing in product development, data preparation, and supportive services, AI companies need to invest in change management. This includes using design thinking in product development, building a strong training and onboarding program, and communicating sensitively (e.g., focusing on the benefits and addressing concerns about the impact on people’s jobs).

Include Humans In The Loop

AI is not perfect; in some situations – especially complex ones – it will fail. Health care, where diseases arise from interacting genetic, social, and behavioral factors, is full of such complexity. So it should not be surprising that AI is more likely to fail in health care than in many other industries, and the cost of failure – a misdiagnosis, a failed drug candidate, or a mistake in prescribing a medication – is much higher.

Therefore, it is often necessary to involve humans in the loop to accept or reject decisions made by AI, and firms building and selling AI-based systems need to factor the cost of this human expertise into their pricing. One company that has done this is AliveCor, whose direct-to-consumer electrocardiogram (EKG) device uses AI to interpret readings that a consumer takes with a relatively cheap device paired with a cell phone app. When the AI encounters an “edge case” (an uncommon pattern it may not have seen before) or finds an issue that requires a clinician’s input, it prompts the user to consider having a clinician take a second look – for a fee, of course.
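The core mechanism here is a routing gate that decides when a model’s output can go straight to the user and when it should be escalated to a human. Below is a minimal sketch of such a gate; the thresholds, labels, and function names are illustrative assumptions, not AliveCor’s actual logic.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real system would tune these against clinical evidence.
AUTO_REPORT_CONFIDENCE = 0.95
ESCALATE_LABELS = {"atrial_fibrillation", "unclassified"}

@dataclass
class EkgInterpretation:
    label: str         # the model's predicted rhythm class
    confidence: float  # the model's confidence in that prediction, 0.0-1.0

def route_interpretation(result: EkgInterpretation) -> str:
    """Decide whether a model output can be reported directly to the user
    or should be flagged for optional clinician review."""
    if result.confidence < AUTO_REPORT_CONFIDENCE:
        # Edge case: the model is unsure, so prompt a human second look.
        return "offer_clinician_review"
    if result.label in ESCALATE_LABELS:
        # High-stakes or inherently ambiguous findings also go to a human.
        return "offer_clinician_review"
    return "report_to_user"

print(route_interpretation(EkgInterpretation("normal_sinus_rhythm", 0.98)))
# -> report_to_user
print(route_interpretation(EkgInterpretation("normal_sinus_rhythm", 0.70)))
# -> offer_clinician_review
```

Pricing then follows the routing: outputs that clear the gate cost the vendor nothing extra, while escalated cases carry a clinician’s fee that can be passed on to the user.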

Where it is not possible to pass on the added cost of human intervention, companies should limit the scope of the product. Buoy Health took this approach with its popular AI-based symptom checker. Its chatbot engages a patient, suggests likely diagnoses, and navigates the patient to the most appropriate point of care, which could be telehealth, urgent care, the emergency room, or the patient’s primary care doctor. In each of these cases, Buoy chooses to let others provide the costly humans in the loop, allowing it to maintain a low-cost model.

AI has enormous potential in health care. But to succeed with their offerings, companies need to tailor their business models to the characteristics of their particular offering. One size does not fit all.

Originally posted on hbr.org by Trishan Panch and Nikhil Bhojwani.

About Authors:
Trishan Panch, MD, is a primary care physician and cofounder of Wellframe, an AI-based digital health company. He is an instructor at the Harvard T.H. Chan School of Public Health.

Nikhil Bhojwani is a founder and managing partner at health care consulting firm Recon Strategy. He is a co-founder of the Assurance Testing Alliance and an advisor to CIC Health, which operates a national-scale online marketplace for Covid-19 testing. He serves on the advisory committee of Brown University’s Healthcare Leadership program and on several boards at non-profits and early-stage companies.