AI Model Bias Can Significantly Damage Trust Of Employees, Customers, And The Public, But There Are Ways To Prevent It


AI influences decisions across the enterprise, but bias can do far-reaching damage to trust and stakeholder relationships. The good news is that there are ways to protect yourself.

A large regional bank uses a newly developed fraud detection artificial intelligence (AI) algorithm to identify potential cases of bank fraud, including anomalous patterns in financial transactions, loan applications, and new account applications. The algorithm is trained on an initial set of data to establish what normal versus fraudulent transactions look like. However, the training data becomes biased through the oversampling of applicants over 45 years of age as examples of fraudulent behavior. This oversampling continues over a period of months, with the bias growing and remaining undetected. The model becomes more likely to flag an older person for fraud than reality warrants. Older customers are increasingly turned down for loans. Some begin to feel alienated while regulators start to ask questions. Trust is lost, the brand’s reputation suffers, and the bank faces significant consequences to its bottom line.
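
To make the mechanism concrete, the sketch below simulates the scenario with synthetic data and a simple scikit-learn classifier: when extra fraud labels are drawn only from applicants over 45, the trained model scores older applicants as riskier even though the underlying fraud signal is age-neutral. All names, rates, and thresholds are illustrative assumptions, not a depiction of any bank’s actual system.

```python
# A minimal, synthetic sketch of the scenario above: when fraud labels are
# oversampled from applicants over 45, a simple model learns to treat age as
# a fraud signal even though the "true" fraud pattern is age-neutral.
# (All names, rates, and thresholds here are illustrative assumptions.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
age = rng.integers(18, 80, size=n)
txn_anomaly = rng.normal(size=n)              # the legitimate fraud signal
true_fraud = txn_anomaly > 2.0                # ground truth ignores age

# Biased labeling: extra "fraud" examples drawn only from the over-45 group.
spurious = (age > 45) & (rng.random(n) < 0.05)
label = true_fraud | spurious

X = np.column_stack([age, txn_anomaly])
model = LogisticRegression(max_iter=1000).fit(X, label)

# The trained model now scores older applicants as riskier on average,
# even though their true fraud rate is the same as everyone else's.
scores = model.predict_proba(X)[:, 1]
print("mean risk score, over 45:   ", round(scores[age > 45].mean(), 3))
print("mean risk score, 45 & under:", round(scores[age <= 45].mean(), 3))
print("true fraud rate, over 45:   ", round(true_fraud[age > 45].mean(), 3))
print("true fraud rate, 45 & under:", round(true_fraud[age <= 45].mean(), 3))
```

In this biased setup, the mean risk score for the over-45 group comes out markedly higher than for the younger group even though the true fraud rates match; comparing group-level scores against ground truth in this way is the kind of check that could surface the bias before deployment.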

We know model bias is potentially a problem, but do we really know how pervasive it is? Certainly, media outlets write stories that capture the public imagination, such as the AI hiring model that is unfairly biased against women or the AI health insurance risk algorithm that unfairly assigns higher risk scores based on racial identity. But as bad as such examples may be, the AI model bias story hardly ends with what we read in the popular press.

Our research indicates that model bias could be more prevalent than many organizations are aware and that it can do much more damage than we may assume, eroding the trust of employees, customers, and the public. The costs can be high: expensive tech fixes, lower revenue and productivity, lost reputation, and staff shortages, to say nothing of lost investments. In fact, 68% of executives surveyed in Deloitte’s recent State of AI in the enterprise, 4th Edition report indicated that their functional group invested US$10 million or more in AI projects in the past fiscal year alone. Even internal-facing models can do significant harm and potentially put those millions of dollars of investment at risk.

To solve this problem, we need to go beyond empathy and good intentions. Understanding, anticipating, and, as much as possible, avoiding the occurrence of model bias can be critical to advance the use of AI models across the organization in a way that preserves stakeholder trust. The good news is that there are approaches that organizations can adopt – including technology-based solutions – that can help.

Model Bias Within Your Organization May Be More Prevalent Than You Know

The term “bias” carries many meanings. For the purposes of this study, we may consider Merriam-Webster’s definition of bias as “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.” Generally speaking, AI model bias happens when the training data on which an AI algorithm or model relies is not reflective of the reality in which the AI is meant to operate. In other words, despite the use of the term “model bias,” a model is not biased in and of itself; rather, it’s the training data that renders a model biased. Stuart Battersby, CTO of AI enterprise software company Chatterbox Labs, concurs: “Regardless of context, often, [model bias risk] comes down to the training data” used to inform the model, and, according to Battersby, any training data is vulnerable to bias.

Model bias is particularly troubling in part because it’s not always anticipated by organizations or those who are working with the AI models in question. These “weapons of math destruction,” as Cathy O’Neil calls them in her book of the same name, are secret and scalable, which can magnify their danger to an organization and its stakeholders.


Evidence suggests that some users of AI models may be oblivious to this danger. Consider Deloitte’s State of AI report, in which some three-quarters of overall respondents said they are “confident” or “very confident” that their deployed models will exhibit qualities of fairness and impartiality. A similar share said they are “confident” or “very confident” that their deployed models will exhibit qualities of robustness and reliability. These data points are important because characteristics such as fairness and robustness are the hallmarks of models that operate as they should, without bias.

Stories of bias found in AI models that speak to societal discrimination and prejudice arise in multiple contexts, including college acceptance decisions, criminal sentencing and parole decisions, and hiring decisions, among many others. Many examples of model bias mentioned publicly relate to bias found in models that serve customer-facing functions. Our research indicates, however, that bias risks are prevalent whether the models in question affect customers or operate within the internal, operational parts of an organization. Some of these model risks within the “back office” of an organization often go undetected until long after deployment, when their impacts have already accumulated. Indeed, the risk of model bias within an internal operating domain like cybersecurity or compliance may be especially insidious, as internal models may not receive the degree of public scrutiny that more outwardly facing deployments receive, thus delaying detection. Jayant Narayan, World Economic Forum Artificial Intelligence and Machine Learning Technology Policy lead, says: “Most AI model bias discussion is still on the external facing functions and the use cases of industries that are more customer facing. Companies should reassess bias and risk classification for their internal functions and use cases.”

Put another way, AI model bias is domain agnostic. In all of its forms, it can occur anywhere an AI model is deployed, regardless of context. Where context does matter, as we’ll discuss, is in the impact of model bias on trust.

Organizing The “Wild West” Of Model Bias

Several classes or archetypes of model bias emerged during our research. We identify two main groups of biases based on the type of action that introduces them: “passive” bias, where the bias is not the result of a planned act, and “active” bias, where the bias occurs because of human action – with or without intent and, even when intentional, often without negative intent. Both types of bias can manifest in different ways, and both should be considered when developing strategies to mitigate model bias risk. In characterizing bias in the classification that follows, we use our own terms as well as terms that are commonly observed in social science and technology literature.

Passive Bias

Examples of passive bias may include:

  • Selection Bias: Overinclusiveness or underinclusiveness of a group; insufficient data; poor labeling. An example of selection bias may be found in an AI model trained on data in which a particular group is identified with a certain characteristic at a higher rate than objective reality justifies.
  • Circumstantial Bias: Training data staleness; changing circumstances. An example of circumstantial bias may include a predictive AI model trained on data that was accurate originally but is no longer accurate because of changing realities or “facts on the ground.”
  • Legacy Or Associational Bias: AI models trained on terms or factors associated with legacies of bias based on race, gender, and other grounds, even if unintentionally. One example is found in a hiring algorithm trained on data that, while not overtly gender-biased, refers to terms that carry a legacy of male association.

Active Bias

Examples of active bias may include:

  • Adversarial Bias: Data poisoning; post-deployment adversarial bias. A hostile actor, for example, gains access to a model’s training data and introduces a bias for nefarious objectives.
  • Judgment Bias: Model is trained properly, but bias is introduced by a model user during implementation by way of misapplication of AI decision output. For example, a model may produce objectively correct results, but the end user misapplies those results in a systematic fashion. In that sense, judgment bias differs from other model biases in that it is not the direct result of flawed training data.

The above grouping is far from exhaustive or definitive; other bias characterizations exist. This speaks to the evolving and still-nascent understanding of what model bias is and how it occurs.
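
As one illustration of how the selection bias archetype can be caught early, the sketch below audits a training data set for groups whose positive-label rate departs sharply from the overall rate. The column names, example figures, and the 1.25x alert threshold are assumptions made for illustration, not a standard.

```python
# A simple training-data audit for the "selection bias" archetype: compare
# each group's positive-label rate against the overall rate. Column names
# ("age_group", "label") and the 1.25x alert threshold are assumptions for
# illustration only.
import pandas as pd

def selection_bias_report(df: pd.DataFrame, group_col: str, label_col: str,
                          ratio_threshold: float = 1.25) -> pd.DataFrame:
    overall_rate = df[label_col].mean()
    report = (
        df.groupby(group_col)[label_col]
          .agg(n="size", positive_rate="mean")
          .assign(ratio_vs_overall=lambda t: t["positive_rate"] / overall_rate)
    )
    report["flagged"] = report["ratio_vs_overall"] > ratio_threshold
    return report

# Example: a group whose positive-label rate sits far above the overall rate
# warrants a closer look before a model is trained on this data.
training_df = pd.DataFrame({
    "age_group": ["over_45"] * 600 + ["45_and_under"] * 1400,
    "label":     [1] * 90 + [0] * 510 + [1] * 35 + [0] * 1365,
})
print(selection_bias_report(training_df, "age_group", "label"))
```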


The Trust Connection: Model Bias May Be Exponentially More Damaging Than You Know

The impact of AI model bias can cascade across an organization, affecting its decision-making and its stakeholders’ trust. Decision-making and trust are two separate but interrelated concepts. Trust is the foundation of a meaningful relationship between an organization and its stakeholders at both the individual and organizational levels. Trust is built through actions that demonstrate a high degree of competence and intent, exhibited as capability, reliability, transparency, and humanity. Competence is foundational to trust and refers to the ability to execute, to follow through on your brand promise. Intent refers to the reason behind your actions, including fairness, transparency, and impact. One without the other doesn’t build or rebuild trust. Both are needed.

When a poor decision is made based on faulty analysis from biased data, an organization risks losing trust with stakeholders who may be relying on a model’s advice. This could manifest, for example, in board members who lose trust in an executive team that recommends an unprofitable project or employees who question the hiring of a less qualified candidate.

Once a decision error occurs and trust breaks down with a given stakeholder, that stakeholder’s behavior can change. For an employee, this could mean less engagement at work; for a customer, lower brand loyalty; for a supply chain partner, less willingness to recommend the business to others. These behavioral changes can have a meaningful impact on organizational performance, possibly limiting sales, productivity, and profitability. Ultimately, the lack of trust can prevent a company from fulfilling its goals and purpose with stakeholders.

Consider the bank to which we referred at this paper’s outset. In that example, AI model bias distorts decision-making by leading the bank to make unfair assumptions about older credit applicants and, as a result, to avoid selling products to the older, underserved market. The reverse could also be true, with bias leading the bank to grant loan applications from younger applicants who are actually engaging in fraud. And once this bias is known – even if the bank made efforts to correct it – bank professionals may lose confidence in the output of the algorithm. Indeed, they may lose confidence in AI models more generally. As a result, they may shy away from important business decisions, such as pursuing actual cases of fraud.

Multiple stakeholders are impacted by the model bias in this example. If the bias leads the bank to underserve older banking customers, it may alienate that constituency, putting their trust and patronage at stake. It may also jeopardize the trust and business of other customers who become aware of and are offended by this bias, even if they are not directly affected. And because this bias may run afoul of regulatory and statutory requirements such as those found in the Equal Credit Opportunity Act, it may damage the trust of regulatory authorities in ways that could result in civil penalties that affect the bottom line. Ultimately, the consequences of this model bias could harm the bank’s reputation and bottom-line performance.

This is just one of many examples of the consequences for decision-making and trust when AI models are unfairly biased (figure 1). The impact of AI model bias is typically not limited to one stakeholder group. On the contrary, the faulty decisions that result most often impact multiple stakeholder groups and can negatively influence their willingness to trust an organization. The context within which the bias takes place – the set of decisions, stakeholders, and behavioral changes that result – can define the stakes and cost to the organization.


To illustrate the individual character of model bias, we depict a few case scenarios showing how model bias could manifest and how decision-making and trust might be affected as a result (figure 1).

[Figure 1]

Model Bias Should Be Addressed In A Proactive And Holistic Way

Once an incident of model bias is found, the organization should “get under the hood” to assess the nature of the bias (including its causes), the ways it has already affected decision-making and, ultimately, stakeholder trust, and how to prevent its recurrence. As Chatterbox Labs’ Battersby says, “You want to really get to the root cause as to why you have that bias and what that means within your organization in order to prevent it from occurring again.” With that said, reacting to a bias already in place is far less desirable than anticipating and preventing the bias from originating at all – or at least catching it before deployment. Ted Kwartler, vice president of Trusted AI at DataRobot, puts it this way: “Finding bias in models is fine, as long as it’s before production. By the time you’re in production, you’re in trouble.”

The following set of guideposts can help organizations anticipate AI model bias across contexts. Such guideposts can help an organization to deploy AI models in ways that are fair and transparent.

  • Educate all within the organization about the potential for AI model bias risk. Even among those most directly involved in the development and deployment of AI models, biases are not always front of mind. For others throughout the organization, model bias is often an abstraction that only becomes an issue after the bias and its accompanying impacts become obvious. Leaders and workers – throughout the C-suite and beyond – should understand the strategic imperative that model bias represents, because everyone in the organization can be affected by it. Such education should target end users of the model across departments such as marketing and HR, so they can be alert to the potential for bias to exist and careful not to unintentionally introduce bias through faulty implementation.
  • Establish a common language to discuss model risk and methods to mitigate it. Trustworthy AI, also known as ethical or responsible AI, is characterized by common themes in the development and use of AI applications: fairness, transparency, reliability, accountability, safety and security, and privacy. These themes provide a common language and lens for evaluating and mitigating AI risks, including model bias, and organizations can consider them when designing, developing, deploying, and operating AI systems. Each theme articulates an aspect of what, together, makes for trustworthy AI, and each supports the organization’s ability to deploy AI models competently and with the right intent (a minimal sketch of one way to turn these themes into a shared review artifact follows this list).
  • Ensure that humans who are most impacted by the model are “in the loop” when developing the model. Our research reveals that humans tend to believe in the accuracy of AI model decisions without any real understanding of how the model works or was developed. This is an especially precarious practice when model bias enters the picture. Each part of the AI model life cycle should routinely reflect a partnership between the technology and all stakeholders. “Bias can be managed if there’s a human in the loop,” says Chatterbox Labs CEO Danny Coleman. But humans in the loop are not just those who develop and deploy the models. It’s also about the end consumers of the model’s decision outputs. They should be as much a part of how a model is developed (understanding what it can and cannot do) as anyone to mitigate the potential damage to trust that a problem can bring. Coleman calls this “managing stakeholder expectations.” And this involvement of stakeholders should start at the model conception stage. Preeti Shivpuri, Deloitte Canada Trustworthy AI leader, puts it this way: “Engaging consultations with different stakeholders and gathering diverse perspectives to challenge the status quo can be critical in addressing inherent biases within data and making AI systems inclusive from the start.”
  • Include process and technology as well. “Bias is a challenge. It’s always going to be there. But I think the best way to solve for it is with a people, process, and technology approach,” says Chatterbox Labs’ Coleman. Humans play an integral role in the AI development life cycle and in bias mitigation, but they are only one part of a larger, integrated schematic that makes trustworthy AI possible.
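
Picking up the second guidepost, the sketch below shows one lightweight way a shared vocabulary could be made tangible: a per-model review record that owners complete, theme by theme, before deployment. The class, field names, and sign-off rule are illustrative assumptions rather than a formal standard.

```python
# One lightweight way to turn the trustworthy AI themes above into a shared
# artifact: a per-model review record that owners fill in before deployment.
# The theme list follows the second guidepost; the structure and field names
# are illustrative assumptions, not a formal standard.
from dataclasses import dataclass, field

THEMES = ["fairness", "transparency", "reliability",
          "accountability", "safety_and_security", "privacy"]

@dataclass
class TrustworthyAIReview:
    model_name: str
    owner: str
    notes: dict = field(default_factory=dict)        # theme -> reviewer notes
    signed_off: dict = field(default_factory=dict)   # theme -> bool

    def record(self, theme: str, note: str, ok: bool) -> None:
        if theme not in THEMES:
            raise ValueError(f"unknown theme: {theme}")
        self.notes[theme] = note
        self.signed_off[theme] = ok

    def ready_for_deployment(self) -> bool:
        # Every theme must be explicitly reviewed and signed off.
        return all(self.signed_off.get(t, False) for t in THEMES)

review = TrustworthyAIReview(model_name="fraud_detector_v2", owner="risk-analytics")
review.record("fairness", "Flag rates compared across age groups; no disparity.", True)
print(review.ready_for_deployment())   # False until all six themes are reviewed
```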


In other words, any solution to the challenge of AI model bias should be holistically based on an integration of people, process, and technology. No one aspect of this three-legged stool is necessarily more important than another. Human judgment is important, as we mentioned. Process provides a sense of order and discipline to AI model governance; it includes monitoring for and correcting model bias as part of the sequential steps of operationalizing machine learning models, sometimes referred to as “MLOps.” Technology, for its part, is the third leg of the stool. Without it, the model (and any model bias) would not exist. But technology is also part of the solution: software platforms are now being developed that can help organizations uncover bias and other vulnerabilities, and help ensure that a model operates fairly.
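
As a hint of what such tooling does, the sketch below shows the kind of automated check that can sit in an MLOps monitoring step: compare the model’s positive (“flag”) rate across groups and raise an alert when the ratio drifts outside a tolerance band. The function, the group labels, and the 0.8–1.25 band (which loosely echoes the commonly cited “four-fifths” heuristic) are illustrative assumptions, not a legal or vendor-specific test.

```python
# A minimal sketch of an automated bias check for a monitoring step in an
# MLOps pipeline: compare the model's positive ("flag") rate across groups
# and alert when the ratio falls outside a tolerance band. The thresholds
# and group labels are illustrative assumptions.
import numpy as np

def flag_rate_ratio(predictions: np.ndarray, group: np.ndarray,
                    group_a, group_b) -> float:
    """Ratio of positive-prediction rates between two groups."""
    rate_a = predictions[group == group_a].mean()
    rate_b = predictions[group == group_b].mean()
    return rate_a / rate_b

def check_bias(predictions, group, group_a, group_b, lower=0.8, upper=1.25):
    ratio = flag_rate_ratio(np.asarray(predictions), np.asarray(group),
                            group_a, group_b)
    if not (lower <= ratio <= upper):
        # In a real pipeline this would page the model owner or block promotion.
        print(f"ALERT: flag-rate ratio {ratio:.2f} outside [{lower}, {upper}]")
    return ratio

# Example: the younger group is flagged at 2%, the older group at 6%.
preds  = np.array([0]*980 + [1]*20 + [0]*940 + [1]*60)
groups = np.array(["45_and_under"]*1000 + ["over_45"]*1000)
check_bias(preds, groups, "45_and_under", "over_45")   # ratio ~0.33 -> ALERT
```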

Moving Forward With Intention

Building trust with stakeholders is a multifaceted, complex challenge. We are all connected. When trust breaks down with one stakeholder, others become aware and may change their behaviors as well.

AI and trust share an inseparable relationship. Trust cannot flourish in an environment that relies on flawed AI, and even the most unbiased AI model provides decision outcomes that matter very little if they serve an untrusting environment. The primary reason that organizations should think about AI model bias is that – more than many issues – bias has the potential to undermine this relationship.

Organizations should meet the challenge of AI model bias with the sense of urgency that such a consequential issue deserves. To some, model bias may seem like an emerging, far-off abstraction. But it is real. And the damage it can cause to stakeholder trust is real, whether organizations focus on it or not.

But there is a path forward. Organizations have at their disposal the tools and resources to help address the challenge of AI model bias before it manifests – through a holistic approach that includes education, a common language, and unrelenting awareness. The organization that chooses a proactive approach now will likely have a leg up on the organization that is forced to take a reactive approach later.

Originally posted on deloitte.com by Don Fancher, Beena Ammanath, Jonathan Holdowsky, and Natasha Buckley.

About Authors:
Don Fancher: Global Leader, Deloitte Forensic
Don Fancher is a Deloitte Risk & Financial Advisory Principal with Deloitte Financial Advisory Services LLP where he serves as the Global Leader of Deloitte Forensic as well as the Co-Leader of Deloitte’s Legal Business Services practice. Mr. Fancher has over 30 years of experience assisting clients and leading practices in forensic, dispute consulting and legal transformation. He currently leads over 4,500 Deloitte professionals around the world serving clients in areas such as financial crime, disputes and investigations, business insurance, discovery, data governance, legal transformation, and contract lifecycle management. Mr. Fancher has significant experience assisting clients and counsel in performing forensic investigations and special reviews for matters regarding financial crime, misappropriation of assets, breach of fiduciary duty, and FCPA violations. These have included both individual employee and institution-wide schemes for misappropriating funds and/or improperly reporting asset values and financial performance. These investigations have been performed for privately held companies, private institutions, not-for-profit entities and major, publicly held corporations. Mr. Fancher also has extensive experience consulting with counsel and management on accounting, financial, and economic matters as they relate to facts and the determination of damages and values in litigation, dispute resolution, and business insurance matters. He has testified in numerous commercial litigation and class-action matters throughout a number of jurisdictions in Federal, State, and Bankruptcy Court. Mr. Fancher has also provided consulting services in a variety of matters regarding the assessment and evaluation of intellectual property portfolios to assist his diverse client base in the management of technology and intangible assets. He has provided assistance in licensing negotiations, technology and intellectual property valuation, market and industry assessments, and intellectual property portfolio management. He has also provided commercialization, monetization and business and intellectual property strategy assistance to clients. Beyond intellectual property, Mr. Fancher has been active in providing innovative solutions and strategies to legal departments in areas of workforce transformation, technology and workflow assessments, and contract lifecycle management implementation. Mr. Fancher earned his BBA in Finance from Texas A&M University and his MBA from Baylor University. He has authored numerous publications, provided presentations, and been quoted extensively in periodicals and publications on subjects around a variety of regulatory compliance, forensic, dispute and legal technology issues. Mr. Fancher also hosts Deloitte’s Resilient Podcast series focusing on legal and regulatory issues. He is active with the American Heart Association, having served as the Past-Chairman of the Southeast Affiliate Board, while also serving on the National Board and Audit Committee. He is also a Certified CrossFit Level 1 Trainer and holds a Secret Level Security Clearance from the United States government.

Beena Ammanath: Executive Director, Global Deloitte AI Institute
Beena is Executive Director of the Global Deloitte AI Institute and leads Trustworthy AI & Ethical Tech at Deloitte. She is the author of the upcoming book releasing in spring 2022 – “Trustworthy AI” – which helps businesses navigate trust and ethics in AI. An award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services and industrial domains with companies such as GE, HPE, Thomson Reuters, British Telecom, Bank of America, e*trade and a number of Silicon Valley startups, Beena is also the Founder of the non-profit Humans For AI, an organization dedicated to increasing diversity in AI. Beena also serves on the Board of AnitaB.org and the Advisory Board at Cal Poly College of Engineering. She has been a Board Member and Advisor to several technology startups. Beena thrives on envisioning and architecting how data, artificial intelligence, and technology in general, can make our world a better, easier place to live for all humans.

Jonathan Holdowsky: Senior Manager | Deloitte Services LP
Jonathan Holdowsky is a senior manager with Deloitte Services LP and part of Deloitte’s Center for Integrated Research. In this role, he has managed a wide array of thought leadership initiatives on issues of strategic importance to clients within consumer and manufacturing sectors. Jonathan’s current research explores the promise of such emerging technologies as additive and advanced manufacturing, Internet of Things, Industry 4.0, and blockchain, among other areas. Jonathan is based in Boston, MA.

Natasha Buckley: Senior Manager | Deloitte Services LP
Natasha, Deloitte Services LP, is a senior manager in Deloitte’s Research & Eminence organization where she studies how companies across industries and geographies are progressing in their digital journey. For the past five years, she has helped lead research and analysis for Deloitte’s annual digital trends study conducted in collaboration with MIT Sloan Management Review. Prior to working at Deloitte, Natasha worked in management consulting.