This blog continues the Building AI Leadership Brain Trust blog series, which targets board directors and CEOs, helping them fulfil their duty of care by developing stronger AI skills and competencies so that their AI programs achieve sustained results.
In this blog series, I have identified forty skill domains in an AI Leadership Brain Trust Framework to guide board directors and CEOs in developing and accelerating their investments in successful AI initiatives. You can see the full roster of the forty leadership Brain Trust skills in my first blog.
Each blog in this series either explores a group of skills or takes a deep dive into one skill area. I have concluded that to unlock the last mile of AI value realization, board directors and CEOs must accelerate building a unified brain trust (a unified set of leadership skills hardwired in relevant digital and AI skills) to modernize their organizations more rapidly.
Knowledge is key. If you locked a room of board directors and CEOs in a boardroom and asked them the following questions, what do you think the outcome would be?
- What steps are required to build a successful AI strategic plan and journey roadmap?
- Where are your AI investments, and have you inventoried or audited them?
- What is the difference between a computing scientist, a data scientist, and an AI scientist? Would their digital literacy skills be sufficient to lead and guide their organizations forward?
- What has been the return on investment (ROI) and value realization of your AI programs or AI products/solutions?
Sadly, I think we would find some very serious operational execution gaps in realizing the last mile of AI. A great deal of R&D and AI modeling exploration is underway, but moving to sustained operating practices, maintaining ongoing knowledge of AI modeling outcomes, and embedding value realization practices remain major gaps in the strategic deployment of AI programs.
Last month, I discussed the importance of agile literacy. In this blog, I will discuss the importance of user-centered design literacy, one of the key technical literacy skills for building AI capabilities that are robust and operationally focused. The technical literacy skills in the framework include:
- Research Methods Literacy
- Agile Methods Literacy
- User-Centered Design Literacy
- Data Analytics Literacy
- Digital Literacy (Cloud, SaaS, Computers, etc.)
- Sciences (Computing Science, Complexity Science, Physics) Literacy
- Artificial Intelligence (AI) and Machine Learning (ML) Literacy
User-Centered Design Literacy
Last month, we discussed that skilled AI data scientists must be agile: AI projects require investigative methods that are both agile and resilient, so that executives have confidence in the exploration process. At the same time, agility is required to continually deliver value outcomes that sustain executive confidence.
Hence, demonstrating short-term value is as important as demonstrating longer-term value. Robust communication practices are also critical in AI programs: in a fast-paced and often attention-deficit economy, board directors, CEOs, and many executives need continual reassurance that their AI investments are generating positive returns.
This blog defines user-centered design literacy and frames the implications and risks that arise when data bias and AI fairness principles are not front and center in AI design practices.
It is imperative that AI project teams include strong user-centric design experts who guide diversity, inclusivity, and fairness approaches across all aspects of the AI product or solution development process.
What Does User-Centered Design (UCD) Mean?
The first key point to understand is that UCD is an iterative and agile process in which solution designers focus on the customers/end users and their needs in each phase of the AI product/solution development process.
Before building an AI-centered product/solution, design teams must involve the customers (end users) who will use the system throughout the end-to-end design process. This is required to ensure that the AI project outcomes are usable, accessible, and sustainable.
There are many UCD methodologies in the market, and the diverse approaches usually follow these four distinct phases.
First, it is standard in any software development project to understand the operational context in which users will interact with the AI product/system. This involves clearly defining what problems the AI system is striving to solve and framing how users would apply the outcomes produced by an AI model.
Second, frame each problem statement clearly, with aligned end-user requirements that define how the problem will be resolved in the product experience. Ensure each end-user requirement has a clearly stated benefit/desired outcome that answers "so what?". Because AI initiatives are very costly, design teams should explore all types of methods to solve the user challenge, as there may well be alternative ways to solve the use case.
Third, once the context and the business problem(s)/challenges are clearly defined, with matching customer requirements and benefit/value (outcome) statements, the solution design teams should be engaged. The design teams should include diverse skills and backgrounds, and in particular should be representative of the demographic the solution is targeting. They should include professionals from multiple disciplines (e.g., business users (domain experts), customers (users), ethnographers, software and/or hardware engineers, statisticians, mathematicians, financial experts, human resources, legal, and privacy experts). From the get-go, ensure the end-user life cycle, starting from the first experience (job to be done), is clearly mapped so that each customer experience, or "moment of truth", is documented and validated by end users (customers). Too often, end-to-end user design constructs are not carefully thought through, reducing the odds of successful outcomes.
Fourth, the design team works closely with the software engineering team to validate all desired outcomes against the end user's context and customer requirements, and to evaluate how well the design experience is performing. Continual end-user feedback is key to completing a validated evaluation review.
In summary, user-centered design is an iterative process that keeps the end user's needs at the center of all stages of design, development, and validation.
What Is A Major Risk In UCD In AI Projects?
One of the biggest risks in AI projects is that the data set used to train the AI models is not representative of the desired outcomes, especially when the historical patterns the AI algorithm analyzes are not relevant to all the populations the solution is striving to make predictions for.
For example, it is estimated that over 70% of the image databases used to train AI models have some form of bias. The problem of biased data sets is not new. In 1988, the UK Commission for Racial Equality found that a medical school's computer program was biased against women and applicants with non-European names. The program matched admissions to historical data that did not represent the future goals of the university, and the medical school was found guilty of discrimination.
Numerous criminal justice cases abound in which AI algorithms have mislabelled African-Americans as high risk, often at twice the rate at which the algorithm so classified white males. AI is merely a reflection of the data set you feed it, so acute care is needed to ensure that data bias in AI models is not skewing results and generating stereotypes.
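The point that AI reflects the data set you feed it can be made concrete with a simple representativeness check. The sketch below is a minimal, hypothetical illustration (the function name, tolerance value, and toy data are my own assumptions, not from any specific toolkit): it compares the demographic make-up of a training set against the population the model will serve and flags groups that are badly under-represented.

```python
from collections import Counter

def representation_gaps(train_groups, population_shares, tolerance=0.5):
    """Return groups whose share of the training data falls below
    `tolerance` times their share of the target population."""
    counts = Counter(train_groups)
    total = len(train_groups)
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = counts.get(group, 0) / total
        if train_share < tolerance * pop_share:
            # Record (share in training data, share in target population)
            gaps[group] = (train_share, pop_share)
    return gaps

# Toy example: group B is 30% of the population but only 5% of training rows.
train = ["A"] * 95 + ["B"] * 5
population = {"A": 0.7, "B": 0.3}
print(representation_gaps(train, population))  # {'B': (0.05, 0.3)}
```

A check like this is only a starting point: it catches missing representation, not biased labels or proxy variables, which need deeper review by the diverse design team described above.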
Joy Buolamwini, a well-respected MIT researcher, has found that facial image analysis technology has higher error rates, particularly for minority women, due to bias in AI training data sets.
In UCD projects, it is imperative that an ethical perspective be integrated into all aspects of AI projects, in particular ensuring that machine learning systems are not predicting biased outcomes. This is especially important with black-box AI methods, where the underlying factors behind a prediction are not easy to understand or interpret.
If the design teams ensure that data bias is not a risk in the training data set, then traditionally disadvantaged groups may well benefit from improved predictions.
Ensuring that AI models are fair, for example by requiring equal predictive value across different groups, is key. Incorporating data bias and fairness considerations into the overall AI project's UCD and implementation process can go a long way toward ensuring that increased diversity and inclusiveness is at the heart of new AI solutions hitting the market.
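The idea of equal predictive value across groups can also be illustrated concretely. The sketch below is hypothetical (the function names, threshold choices, and toy data are my own assumptions, and this is only one of several fairness criteria in use): it computes the positive predictive value per demographic group and reports the largest gap between groups.

```python
def group_ppv(y_true, y_pred, groups):
    """Positive predictive value per group: of those predicted positive,
    the fraction that were actually positive."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yp == 1:
            tp, pp = stats.get(g, (0, 0))
            stats[g] = (tp + (yt == 1), pp + 1)
    return {g: tp / pp for g, (tp, pp) in stats.items() if pp}

def max_ppv_gap(y_true, y_pred, groups):
    """Largest difference in positive predictive value between any two groups."""
    ppv = group_ppv(y_true, y_pred, groups)
    return max(ppv.values()) - min(ppv.values())

# Toy example: group A's positive predictions are right 2/3 of the time,
# group B's only 1/3 of the time.
y_true = [1, 0, 1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(max_ppv_gap(y_true, y_pred, groups), 2))  # 0.33
```

A large gap signals that a positive prediction means something different for one group than for another, which is exactly the kind of unfairness the design team should surface and investigate before deployment.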
Where Can CEOs And Board Directors Go To Learn More About User-Centered Design (UCD) And In Particular Data Bias And Fairness?
The AI Now Institute: This site is particularly useful, as it has a number of invaluable reports on AI developments and reinforces the importance of user-centric design practices that embed fair AI approaches throughout the software development process.
The Partnership on AI: This site also has a number of research reports drawing on diverse industry stakeholders' points of view, on topics ranging from why organizations should prioritize worker well-being, to why leaders must advance their design knowledge to ensure that the demographic data used in AI training approaches is fair and unbiased.
The Alan Turing Institute: Fairness, Transparency, Privacy Group – This site is full of rich online video gems, from Turing Research Fellow Dr. Brent Mittelstadt's research on the ethics of algorithms, machine learning, artificial intelligence, and data analytics, to Suchana Seth's research on identifying measures of fairness, their implications for technology policy and regulation, and the challenges of implementing fairness methods in AI.
Google AI: Google also has a number of useful bias and fairness frameworks and toolkits to help guide software designers and engineers in building stronger UCD practices.
IBM – AI Fairness 360 Framework: IBM has developed a clear position on AI fairness and has identified over 180 forms of data bias risk, with software toolkits to help guide AI projects to successful outcomes.
In summary, integrating AI ethics and fairness methods into all UCD practices across AI projects is an important way to embed ethics knowledge in computer science and AI programs.
Harvard has developed an open curriculum on ethics to help guide companies in developing an ethics position on what they are building.
As discussed throughout my blog series, AI has many benefits for business, the economy, and for helping to solve many of our world's deepest challenges. However, to get to the other side, ethical AI and data fairness practices must be user-centered design cornerstones, to ensure we don't just replicate and speed up the biases ingrained in so many aspects of today's world.
AI, if done well, will make for a stronger world – a more equitable, diverse, and inclusive world – but if these concerns are not taken seriously in the early design phases of AI projects, we will face tremendous challenges ahead.
Key questions that board directors or CEOs could ask in their respective board meetings relating to UCD are:
- Does the company have a clearly defined User-Centered Design Methodology underpinning its AI programs? If not, why? Is building UCD skills a focused skill development area and is it adequately funded? How is UCD maturity being measured?
- Does the UCD methodology integrate ethical AI and fairness methods throughout all stages of the design, build, implementation, and sustainment operating process?
Board directors and CEOs need to step up and ensure that their AI-enabled digital business models have strong foundations, with UCD recognized as a critical skill competency for building trusted AI centers of excellence.
originally posted on forbes.com by Cindy Gordon
About Author: Dr. Cindy Gordon is a CEO, thought leader, author, keynote speaker, board director, and advisor to companies and governments striving to modernize their business operations with advanced AI methods. She is the CEO and Founder of SalesChoice, an AI SaaS company focused on improving sales revenue inefficiencies and ending revenue uncertainty. A former Accenture, Xerox, and Citicorp executive, she bridges governance, strategy, and operations in her AI contributions. She is a board advisor of the Forbes School of Business and Technology and the AI Forum. She is passionate about modernizing innovation with disruptive technologies (SaaS/Cloud, Smart Apps, AI, IoT, Robots), with 13 books in the market and a 14th, The AI Split: A Perfect World or a Perfect Storm, to be released shortly. Follow her on LinkedIn, Twitter, or her website. You can also access her at The AI Directory.