You cannot escape the everyday realities of Artificial Intelligence (AI). Business leaders across the Fortune 1000, in diverse industries, have AI-focused initiatives well underway. Global companies in both enterprise and mid-markets are rapidly innovating to grow new revenues, increase profits, and discover new value in product and service offerings via the alluring promise of an AI advantage.
Despite the growth of AI, board directors and CEOs are still well behind in AI language literacy and risk management practices. The acceleration of AI is almost like a tempest, where a perfect storm may be brewing, because few board directors and CEOs can answer this question: Where are your AI algorithms (algos) and models located, and do your AI algos/models have risk profiles?
AI growth is like the Wild West: a new global research report released in early July 2020 stated that the AI market is growing at over 42% CAGR and will reach over $733.7B USD. According to MIT Sloan Research, over 90% of larger enterprises are using AI to improve their customer interaction journeys. The growth of AI start-up investments is reminiscent of the dot-com bull market, and you may recall the market's roughly 76% fall following its March 2000 peak, which created an awakening on the importance of value realization and profitability.
According to CB Insights, $26.6 billion was invested in AI in 2019, spanning more than 2,200 deals worldwide. Despite Covid-19, the outlook remains strong: transformative health-emergency technologies, such as smart machines and healthcare robots, are rapidly emerging AI solutions helping to contain the epidemic.
On average, investments in advanced analytics will exceed 11% of overall marketing budgets by 2022. Spending on AI software will top $125B by 2025 as organizations weave AI and machine learning tools into their business processes.
You might think that, with all this growth activity, more board directors and CEOs could easily produce a comprehensive list identifying where all their AI algos/models are, provide a robust risk profile, and demonstrate value realization with clear Key Performance Indicators (KPIs) and Return on Investment (ROI) markers.
Unfortunately, many companies have been lured into AI programs with black-box AI practices, meaning clear accountabilities are not evident or transparent, let alone audited to manage risk. Board directors and CEOs know where their employees are located, whether they are working remotely or in an office, and whom to contact for customer service or personnel issues.
Yet I do not know of one global company where a board director or a CEO can produce, in less than five minutes, a comprehensive list of all their AI algo/AI model assets across their enterprise operations, know the last model revision date, and have robust risk-classification evidence verified by third-party auditors.
With the democratization of data, which is the foundation of AI enablement, AI and Machine Learning (ML) KPIs must be elevated to the same importance as our financial KPIs, with the same transparency and fiduciary discipline that auditors apply to profit-and-loss statements. Our world has changed and data is now our most strategic asset, yet few companies are role models in their data management practices, able to know easily where data is designed, collected, and stored to enable and track the value of AI model transformations.
Few companies have mature AI centers of excellence where machine learning operations (MLOps) is a competency center, although many companies are now starting to invest in MLOps. Forbes contributor Ron Schmelzer recently profiled the emergence of MLOps with an excellent summary to advance knowledge in this area. In addition, a recent study from New Relic found that 89% of 750 global senior IT decision-makers surveyed believe that AI and machine learning are critical to how organizations run their IT operations, and nearly 84% of the respondents confirmed that AI and machine learning will make their roles more manageable. This optimistic and positive outlook for AI will accelerate the adoption of improved data management practices, which are key to AI modelling and risk management.
In my own research, speaking directly with over 500 C-level executives over the past 18 months at mid-sized to large B2B enterprises across the globe, I was not able to identify one company that could produce, within five minutes, the answers to the questions below.
Asking the right AI questions to keep leading forward: each project that uses an AI algorithm, or a series of AI algos, to build a customized AI model to solve a specific problem or business challenge should be able to answer questions like these:
Use Case(s) History
- What use case(s) was the AI model/algorithm(s) used for?
- What business problem or challenge was the AI model/algo solving?
- What was the initial estimated value (ROI) of the AI model and methods to the organization before designing, building, and implementing the use case?
AI Model Ownership History
- Who wrote the algorithm or built the AI model?
- Is the process owner currently with the company?
- Is there a secondary process owner for the AI model, given the risk implications of the AI model and algorithmic approaches?
- Was the algorithm and the model structure audited by someone other than the creator? If so, by whom?
Creation And Revision History
- When was the AI model/AI algorithm created?
- How many revisions have been made to the AI model/AI algorithm(s) since its first production release?
- What type of AI algorithm(s) are being used?
- Are the algorithms open source? If so, which ones? Or did someone write a unique AI algorithm to solve your unique business challenge?
AI Algo/Model Methods History
- What is the mathematical structure/formula for the AI algorithm?
- Has the math been verified for accuracy by a third-party expert?
- Who is the current process owner overseeing the AI model and the algorithm(s) it operates on?
- What were the data types (structured/unstructured data) and data sources (internal, external, both, etc.) used for the AI model development?
- What was the size of the data set?
- Was the dataset cleansed prior to analysis? If so, who did the cleaning and what methods were used?
- What was the quality and accuracy of the data sources used in the AI model?
- What was the baseline predictive accuracy score compared to all the version history?
- Is there a risk class for the AI model and AI algorithms used and a risk mitigation plan?
- Was the AI model tested for data bias?
- What data bias methods were used?
- How many types of data bias risk reviews were completed?
- When was the last time the AI model was reviewed and optimized/retrained?
AI Algo/Model Value Realization
- What value did the AI model's outcomes deliver to the organization in terms of return on investment?
- Are there efficiency or effectiveness value outcomes clearly defined supporting the ROI?
- How accurate was the first use-case ROI/value outcome prediction compared to the actual AI production ROI outcome(s)?
- Were the AI value outcomes validated or audited and signed off by financial or third-party experts? If so, was a report filed?
- How does this AI model approach compare to other industry best practices?
- Is there an active process improvement plan on file for the AI model?
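These audit questions amount to a model inventory: every AI model should carry a record of its ownership, risk class, bias reviews, and value outcomes. As a purely illustrative sketch (all field names, flag wording, and the one-year review window are my own assumptions, not an established standard), a minimal registry record and governance gap check could look like this:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ModelRegistryEntry:
    """One inventory record per AI model, mirroring the audit questions above."""
    name: str
    use_case: str
    business_problem: str
    process_owner: str
    secondary_owner: Optional[str]     # backup owner, given the risk implications
    created: date
    last_reviewed: date                # last review/retraining date
    revision_count: int
    algorithm_types: List[str]
    risk_class: Optional[str]          # e.g. "low" / "medium" / "high"
    bias_reviews_completed: int
    third_party_audited: bool
    estimated_roi: Optional[float]     # projected at design time
    realized_roi: Optional[float]      # measured in production

def audit_gaps(entry: ModelRegistryEntry, review_window_days: int = 365) -> List[str]:
    """Return the governance gaps a board-level audit would flag for one model."""
    gaps = []
    if entry.secondary_owner is None:
        gaps.append("no secondary process owner")
    if entry.risk_class is None:
        gaps.append("no risk classification")
    if entry.bias_reviews_completed == 0:
        gaps.append("no data-bias review on record")
    if not entry.third_party_audited:
        gaps.append("not audited by a third party")
    if (date.today() - entry.last_reviewed).days > review_window_days:
        gaps.append("review/retraining overdue")
    if entry.realized_roi is None:
        gaps.append("value realization not measured")
    return gaps
```

With records like this on file, the five-minute test becomes a simple query: list every model whose `audit_gaps` result is non-empty.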
If you have all these answers on record, I want to hear from you: you are world-class in your awareness of the accountabilities that must be in place when using AI to advance business models.
Although there are many other questions involved in governing an AI Center of Excellence and tracking the evolution of AI models, an annual audit, risk assessment, and governance operating process is an area where board directors and CEOs can lead forward.
Unfortunately, it is more often the case that an AI model is created by a data scientist, a computer programmer, or a professional services firm (a third-party supplier), each striving to build a specific AI model: predicting the intensity of the second wave of Covid-19 in the USA hot zones, where over 20% of the American public now live; determining insect harvest risks to fruit trees using drones with AI technology; predicting revenue forecasts; or ensuring the underlying risk management practices of AI approaches. The majority of the time, those designing and building the AI models have the best intentions.
Board directors and CEOs must appreciate that AI literacy is a new competency they need to develop, and that they must recruit the right talent to advance their organizations. AI models need nurturing to move successfully into production environments, and investing in modernized data management infrastructure is key to enabling data and machine learning ops to succeed. Executives make mistakes if they do not monitor the AI model's production environment, retrain the model, and, over time, augment it with additional data sources to deepen the model's insights.
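The monitoring and retraining discipline described above can be sketched as a simple drift check: compare recent production accuracy against the accuracy measured at deployment, and flag a retrain when the drop exceeds a tolerance. The function name and the five-point tolerance here are illustrative assumptions, not a prescribed method:

```python
from typing import List

def needs_retraining(baseline_accuracy: float,
                     recent_accuracies: List[float],
                     tolerance: float = 0.05) -> bool:
    """Flag a retrain when recent production accuracy drifts below baseline.

    baseline_accuracy: accuracy measured when the model was first deployed.
    recent_accuracies: per-period accuracy observed in production.
    tolerance: acceptable drop before a retrain is triggered (assumed 5 points).
    """
    if not recent_accuracies:
        return False  # no production evidence yet; nothing to compare
    recent_avg = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_avg) > tolerance
```

In practice this check would be one of several drift signals (data distribution shifts, prediction skew, business KPI movement), but even this minimal version makes the "monitor and retrain" obligation concrete and auditable.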
AI is like planting a garden: replenishing it with nutrients and removing weeds is a long-term investment to harvest beauty. AI is not like a sculpture, where you create the model and admire the artistry in its original state for years to come. I like to refer to AI as the new oxygen rather than the new oil, as AI is increasingly pervasive and, like climate change, its ebbs and flows will be everywhere. Being able to see how the garden grows beyond your backyard will require tremendous foresight to plan wisely.
Unfortunately for board directors and CEOs, many of their technology leaders or CIOs are not well skilled in AI and data science practices, further creating risks for companies advancing into AI. The rise of Chief Data Officers (CDOs) and Chief Data Science Officers (CDSOs) is advancing AI model building and risk management practices, although investment in the data enablement support systems of most companies must pick up the pace to herd the algos and AI models to safer pastures.
Board directors and CEOs have a responsibility to step forward in AI governance by ensuring an AI audit and risk management framework is being thoughtfully operationalized. The questions outlined in this article can go a long way toward leading forward.
Stay tuned for my next blog where I will dive into AI data bias and the imperative for board directors and CEOs to lead forward with more responsible AI governance practices. #AILeadForward
Dr. Cindy Gordon is a CEO, thought leader, author, keynote speaker, board director, and advisor to companies and governments striving to modernize their business operations with advanced AI methods. She is the CEO and Founder of SalesChoice, an AI SaaS B2B company focused on improving sales revenue inefficiencies and ending revenue uncertainty. A former Accenture, Xerox, and Citicorp executive, she bridges governance, strategy, and operations in her AI contributions. She is a board advisor of the Forbes School of Business and Technology and the AI Forum. She is passionate about modernizing innovation with disruptive technologies (SaaS/Cloud, Smart Apps, AI, IoT, Robots), with 13 books in the market and a 14th, The AI Split: A Perfect World or a Perfect Storm, to be released shortly. Follow her on LinkedIn, on Twitter, or via her website. You can also access her at The AI Directory.
originally posted on forbes.com by Cindy Gordon