Creating AI Brain Trust | A Board Director And CEOs – Development Of Stronger Skills And Competencies


Modernize or atrophy: this is the reality for laggard companies not preparing to compete in an increasingly intelligent world, where sensors and AI methods will be embedded in every business process.

Over 80% of the companies investing in AI do not embed it in ongoing operational practices; instead, AI is used as an investigative approach to answer difficult questions, and unfortunately the sponsorship needed to sustain the AI models, in many cases, simply dies off.

I have identified over 40 overall skill domains in the AI Leadership Brain Trust Framework; for the full roster, see my first blog, and see the links to the series below for more information.

This blog completes the ten business skills and identifies key discovery dialogue questions to advance an organization's business skills from an AI perspective.

Business Skills:

  • Customer Orientation: One of the easiest ways to advance AI in your organization is to identify all your core customer operating processes and be clear on the data density dimensions (volume, size, type, quality, etc.). A single executive PowerPoint that defines your organization's end-to-end (E2E) customer operating value chain and highlights the quality and ease of access of all your customer data repositories is a good starting point for understanding how many customer data moats flow through your organization's business processes. Key customer process areas, usually ripe for AI, are often found in customer relationship management (CRM) systems, where customer contact history can be easily tapped to solve use cases such as: identifying the best sales leads or sales opportunities for sales coverage; predicting sales forecasts from historical data sets; identifying which customers have the highest odds of becoming a VIP (very important person) customer; identifying which customers have higher churn (loss) risks; identifying a customer's personality type to correlate with your sales coverage strategy (best rep – best customer fit); using AI to rewrite your sales or marketing emails in the natural linguistic voice of your customer; using AI voice coaching to alter your voice to create more emotional connection with your customers, either to increase propensity to purchase or to demonstrate more empathy and reduce customer churn; or even having AI chatbots answer customer questions or complete a customer order. There are thousands of AI customer use cases that can add value to the customer experience; what is key is identifying a challenge or problem area and then advancing the use case that adds the most value in solving that specific customer challenge. There is nothing more important than customer growth, increasing your customers' share of wallet, and tracking your customers' lifetime value (LTV).
  • Problem Solving Orientation: Most leaders do not know that the most important part of an AI journey is clearly identifying the business problem you want to solve and ensuring the problem has diverse stakeholders' support. The problem must be significant, as the journey to solve the challenge may take months or years; either way, building a sustaining operating process around the AI observations or insights requires ongoing nurturing. AI is not like other problem-solving methods because once an AI model is created, it behaves like a young child in its early manifestation stages: the model is always hungry and wants to learn more, hence more food (data, in this case) is needed. Continual reflection by data scientists to find new ways to improve the AI model is a constant reality; whether by refreshing data, adding new data, or shifting or augmenting current AI methods to strengthen prediction accuracy, this is an ongoing responsibility of data scientists charged with securing trusted outcomes. It is important that board directors and CEOs clearly understand that AI models easily atrophy if they are not maintained. Hence, organizations must understand and plan for AI sustainability practices, or their AI models will act like abandoned floating code going nowhere. Instead, if the AI models are nurtured with data nutrients and ongoing care, promising growth can propel organizations to new performance heights. An AI journey is a never-ending discovery process, and the more problem definition clarity and relevant context or sense-making achieved, the better the odds that the AI investments will return positive value to their stakeholders.
  • Analytical And Research Rigour: AI is an intensive analysis and research discipline, and many model experiments must be performed to find the strongest predictive accuracy to help answer the identified business problem. Keeping track of the AI model type and research method(s) used requires careful note-taking, and data scientists need to be evaluated on the quality of their documentation and research record-keeping skills. In most organizations this is a relatively messy area, as quality inspection practices and processes in model building have evolved like the wild west, and few historical modeling roadmaps are easily retrievable. Fortunately, this is all starting to change rapidly, as a growing market of machine learning operations (MLOps) software tools brings integrated AI model lifecycle management, providing: model versioning and inventory management controls, model performance monitoring, model discovery (research), and model security (a robust history so models are not orphans). Some of the leading AI market players include Amazon, IBM, and Microsoft Azure, along with smaller emerging market leaders like DataRobot, Dataiku, Data Splunk, H2O.ai, Modzy, and SignalFx, to name a few. According to Forrester, the MLOps market will exceed $4B by 2025, and five years ago this software category barely existed. This is a major operational risk gap, as the majority of board directors' and CEOs' knowledge of AI infrastructure enablements like MLOps is weak. Just as companies had to invest in supply chain management (SCM) or CRM practices, to get AI right, MLOps and cloud infrastructure investments are also required to manage the AI model lifecycle and ensure data lineage practices are robust and secure.
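To make the model lifecycle controls above concrete, here is a minimal, purely illustrative sketch of a model registry in Python. This is not any vendor's actual API; the model name and data paths are hypothetical, and real MLOps platforms add monitoring, access control, and deployment on top of this basic versioning idea:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelVersion:
    """One registered model version with its lineage metadata."""
    name: str
    version: int
    training_data: str        # pointer to the data snapshot used
    accuracy: float           # validation accuracy at registration time
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ModelRegistry:
    """Toy inventory: assigns version numbers so no model becomes an orphan."""
    def __init__(self):
        self._models = {}

    def register(self, name, training_data, accuracy):
        versions = self._models.setdefault(name, [])
        mv = ModelVersion(name, len(versions) + 1, training_data, accuracy)
        versions.append(mv)
        return mv

    def latest(self, name):
        return self._models[name][-1]

# Hypothetical usage: two training runs of a churn model on refreshed data.
registry = ModelRegistry()
registry.register("churn", "s3://lake/churn/2023-q1", accuracy=0.87)
v2 = registry.register("churn", "s3://lake/churn/2023-q2", accuracy=0.91)
print(v2.version)  # 2
```

Even this toy version shows why such tooling matters: every model carries a pointer back to its training data, so an auditor can always answer "which data produced this prediction?"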
  • Communication /Channel Relevance (Social, Written, Voice, Etc.): Using AI in business requires an effective governance communication plan, and thinking through the most optimal communication channels to reach the stakeholders who will be impacted by AI is often an under-budgeted area in AI programs. Too often, AI projects are left with only technology-centric resources who squirrel away, delighted with their diverse collection of data nuts, and unfortunately they often do not communicate effectively to build stakeholder confidence that their AI programs or projects are adding value to the organization's business goals. Whether the defined communication channels are weekly or monthly management review meetings, it is imperative that board directors and CEOs ensure that their AI program teams develop clearly defined communication plan(s), using robust key performance indicators (KPIs), in order to secure diverse stakeholder alignment. Do your AI programs have skilled communication and change management resources engaged to support your AI investments? Do your people understand why the investments in AI are being made, and the value, relevance, and impact to their role in both the short term and the long term? You never want to forget WIIFM (What's in it for me?) in your AI program communication.
  • Ethical Robustness (Transparency, Trust, Bias, Privacy): Do you have an AI ethics and data bias expert in your AI programs? How are you monitoring your AI programs against AI risks to ensure that you are developing trusted AI practices? If there is one area that board directors and CEOs need to worry about in their AI programs, it is ensuring that those programs have quality ethical reviews. Even the types of questions that your organization explores will have ethical boundaries. For example, an image monitoring system that measures the number of times an employee blinks or looks away in a Zoom call could alert management that the employee may be bored, disinterested, or not paying attention. This may sound like the surveillance cameras of George Orwell's 1984. That being said, many companies are integrating this type of technology into collaboration software platforms, evaluating your every movement: AI imaging (facial and non-verbal cues, or even your physical posture and gait), word classification (positive or negative sentiment), your voice (sound and emotion), or monitoring of your heartbeat and blood flow, all together yielding increasingly detailed insights into how you may be feeling. This is where the world is heading, whether we like it or not. These are ethical discussions that leaders will increasingly need to think about, as AI has already matured into employee recruiting systems that attempt to detect whether you are lying or avoiding questions if you blink; yet you may simply have a nervous eye tic and fail the job screening, because the AI is not smart enough to understand this nuance unless additional pre-screening questions are built in for more accurate detection. More concerning still is ensuring the data set that you are using is representative of the population and the problem type you are trying to solve. Many data sets are biased and draw inaccurate conclusions, which can have legal consequences.
The regulatory environments for AI ethical robustness are in their infancy, but 2021 will bring increased AI regulatory guidelines, including from IEEE Standards. In the meantime, every board director or CEO should be aware of two sources: the Organization for Economic Co-operation and Development's recommendations for responsible stewardship of trustworthy AI, which forty-two nations have co-signed, and the European Union's High-Level Expert Group's Ethics Guidelines for Trustworthy AI. Ensuring that your AI program has an AI ethics review at the problem definition stage, and again at the data sourcing and data methods stage, provides key risk management review gates to confirm the desired outcomes (outputs) are not a legal or reputational risk to the company. IBM has software to detect data bias types, called AI Fairness 360, which includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in data sets and models. In larger organizations, we will see new leadership roles like AI data bias risk officers (DBROs), who will be akin to auditors with specialized data management, ethics, and security designations. These AI model risk officers will have a quality and risk management responsibility to review and sign off on AI models; alternatively, companies will increasingly engage an audit firm with expertise in trusted AI and risk management. EY is one of the most progressive global leaders in AI trust sense-making and has built a strong AI evaluation methodology on Trusted AI. Although KPMG, PwC, and many boutique firms also have AI audit practices, EY's Trusted AI framework is among the most comprehensive that I have reviewed to date.
  • Program And Project Management: As in any major technology deployment program using advanced technologies like AI, ensuring the solutioning team has strong skills in program management (the more complex skill of overseeing multiple project management streams) and project management is highly relevant to AI initiatives. Key questions include: Does your organization have skilled program and project management practitioners certified in well-recognized methods like PMP (Project Management Professional) or PgMP (Program Management Professional)? These certifications increase an organization's ability to manage risk and cross-functional teams. How many employees have these types of certifications in your company? Are your employees supported in pursuing these certifications (PMP or PgMP) as part of your organization's talent management development programs?
  • Process & Data Management Orientation: Process management skills have been a cornerstone of business for many years, training employees to understand end-to-end process workflows and building skills in process mapping, total quality management (TQM), or Six Sigma certifications. All of these skills have contributed to improving business process management capabilities. How many trained experts do you have with process-related certifications like Six Sigma or TQM? Understanding the value chains of process and data flows is a key organizational skill for ensuring data lineage. Data lineage covers the data's origin, what happens to it, and where it moves over time; it gives visibility while greatly simplifying the ability to trace errors back to the root cause in a data analytics process. When one starts an AI project, after defining the business problem, the first question that rapidly emerges is: where is the relevant data to solve this business problem? Does the data reside in one repository or in multiple repositories? What are the quality and accuracy of the data sources? Who owns the data sources? What are the types of data (what percentage is structured versus unstructured)? How easy is it to aggregate (bring together) the diverse data sources? Where will the data inventory be stored? How can the data sources be continually updated or replicated so the data modeling does not rapidly atrophy in experimentation, but is instead enabled and sustained in production? How do you ensure you can continually capture and clean the data? The key question board directors and CEOs can strategically focus on, versus the mechanics of those prior questions, is: How many of our resources are trained in formal data management methods? Two of the leading data management credentials are the DMBOK (Data Management Body of Knowledge) and the CDMP (Certified Data Management Professional) certification.
This is an area where I continually see companies being very weak: few have integrated business process and data lineage skills. To be ready for AI, ease of data access and data quality are everything. Building centralized data repositories and cloud data lakes are important building blocks for board directors and CEOs to ensure the company's operating structures are being modernized; otherwise, they will never be able to build or sustain internal AI solutions to evolve their business models. In summary, it is important to remember that in AI programs or projects, about 80% of the early project set-up is data related, so data skills are the most critical in building relevancy to AI initiatives.
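The data lineage idea defined above, recording a data point's origin and every transformation it passes through so errors can be traced back to the root cause, can be sketched in a few lines of Python. This is a purely illustrative example; the file name, row format, and steps are hypothetical:

```python
# Minimal illustration of data lineage: each transformation step records
# its input, operation, and output, building a traceable audit trail.
lineage = []

def record_step(source, operation, output):
    """Log one step of the pipeline, then pass the output along."""
    lineage.append({"source": source, "operation": operation, "output": output})
    return output

# Hypothetical CRM export: "name,label" rows, one with a missing label.
raw = record_step("crm_export.csv", "ingest",
                  ["alice,VIP", "bob,", "carol,churn-risk"])
clean = record_step(raw, "drop_incomplete_rows",
                    [r for r in raw if not r.endswith(",")])
labels = record_step(clean, "extract_label",
                     [r.split(",")[1] for r in clean])

# Walking the trail backwards answers "where did this value come from?"
for step in reversed(lineage):
    print(step["operation"])
```

Commercial data catalogs automate exactly this bookkeeping at scale; the governance question for leadership is whether such a trail exists at all for the data feeding their AI models.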
  • Measurement – Key Performance Indicators (KPIs): In AI, there must be clearly defined measurement systems identifying all the KPIs relevant to AI. One framework for KPIs is SMART, which stands for ensuring your KPIs are: Specific, Measurable, Attainable, Relevant, and Time-Sensitive. CEOs and board directors must ensure there are clearly defined KPIs that are strategic and measurable over time, as AI models need continually refreshed data and retraining, and the models' predictions need to stay in an acceptable range that enables organizations to trust AI. Operational KPIs in AI projects could include increasing customer retention as a result of deploying AI chatbots, increasing customer conversion (purchasing) rates by using AI-guided selling software that improves yield and customer targeting relevancy, or identifying fraudulent behaviors to reduce risks and losses. Other KPIs on AI projects may be focused on classification accuracy in terms of true positive rates (TPRs) or false positive rates (FPRs); for example, an AI algorithm with a false positive rate of only 1% has a very strong accuracy score. Another AI KPI is Mean Absolute Error (MAE), which is mainly used in linear regression models and looks at the absolute difference between the actual and predicted values. Another popular metric is root mean squared error (RMSE), which gives more weight to larger errors because all errors are squared before they are averaged. A relatively new KPI introduced by Google is the Sensibleness and Specificity Average (SSA), which analyzes whether a response makes sense (sensibleness) and whether it is specific enough. Implementing SSA involves more human judgment, but with external reviewers developing pre (prior) and post metrics, the SSA metric is valuable because it creates a context for easier comprehension.
If humans cannot experience, understand, and trust the benefits or value of AI, it really does not matter how strong the prediction accuracy scores are. Hence, CEOs and board directors must ensure relevancy, trust, and value realization are strongly anchored in AI investments.
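The accuracy KPIs mentioned above (false positive rate, MAE, and RMSE) follow directly from their definitions. Here is a short Python sketch with illustrative numbers; the sales forecasts and churn labels are hypothetical:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average absolute gap between actual and predicted."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: errors are squared before averaging,
    so larger errors carry more weight than in MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def false_positive_rate(actual, predicted):
    """FPR: share of true negatives the model wrongly flagged as positive."""
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return fp / (fp + tn)

# Regression example: actual vs. forecasted monthly sales (illustrative numbers).
actual_sales    = [100.0, 120.0, 130.0, 90.0]
predicted_sales = [110.0, 118.0, 125.0, 95.0]
print(mae(actual_sales, predicted_sales))               # 5.5
print(round(rmse(actual_sales, predicted_sales), 2))    # 6.2

# Classification example: churn labels (1 = churned, 0 = retained).
actual_churn    = [0, 0, 0, 0, 1, 1]
predicted_churn = [0, 1, 0, 0, 1, 0]
print(false_positive_rate(actual_churn, predicted_churn))  # 0.25
```

Note how RMSE (about 6.2) exceeds MAE (5.5) on the same forecasts: the single 10-unit miss is penalized more heavily once errors are squared, which is exactly why teams report both.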
  • Finance And Business: Having a CFO who is knowledgeable about advanced analytics and AI, and who ensures KPIs are SMART, is increasingly a business imperative. PwC has already signaled that perhaps AI functions should report to CFOs rather than CIOs, given CFOs' skills are often stronger in measurement and risk management functions. AI in finance has many applications, whether analyzing credit card transactions to predict your personality profile from shopping habits, predicting which investments best fit your investment profile, creating customized advice based on your interaction history, or predicting fraud and credit risks. Key questions the CEO or board director can reflect on are: Who is the right executive stakeholder to help you advance your business to use AI for strategic value and manage the risk? Who has the legal risk skills to ensure your inventory of AI algorithms is recorded accurately and validated to manage liability risk (bias, misclassifications) and to ensure that you are building trusted AI practices? Are you going to bet on your CFO or CIO, or recruit a Chief AI and Data Officer with specialized skills? And to whom will this new officer role report? Manulife, a global insurance firm, has a Chief Data Science Officer, a Chief Analytics Officer, a CIO, and a Chief Security Officer, while a major bank, TD, has a Chief AI Officer, a Chief Data Officer, a CIO, and a Chief Security Officer. I would submit we are still exploring the different organizational roles and structures to get AI, analytics, and data right. My own view is that simplification is the best pathway: data, analytics, and AI should be an integrated function, reducing communication confusion about who is accountable or responsible for different practices.
  • Sustainability Robustness (Environment, Human, And Societal Well-Being): Ensuring that CEOs and board directors are thinking longer-term about the sustainability aspects of AI, and the impact AI may have across multiple dimensions, is key. Every AI project should go through a sustainability review and reflect on the question: Is AI being used for good, or is it being used in a way that compromises your corporate brand, values, or vision of your company, society, the environment, or fundamentally what it means to be "Human" (i.e., protecting our species)? Although these are deep sustainability perspectives, one must recall the late Dr. Stephen Hawking's 2014 prediction and warning on the BBC that: "The development of full artificial intelligence could spell the end of the human race." Taking a few extra days with diverse stakeholders to deeply reflect on the value and risks of AI from a sustainability lens may well be the greatest achievement companies can make in ensuring AI is deployed transparently and responsibly.

Next week's blog will further evolve the AI Brain Trust Framework of leadership skills, drilling down to explain the Emotional and Social Intelligence skills in relation to AI.

originally posted on forbes.com by Cindy Gordon

About Author: Dr. Cindy Gordon is a CEO, thought leader, author, keynote speaker, board director, and advisor to companies and governments striving to modernize their business operations with advanced AI methods. She is the CEO and Founder of SalesChoice, an AI SaaS company focused on Improving Sales Revenue Inefficiencies and Ending Revenue Uncertainty. A former Accenture, Xerox, and Citicorp executive, she bridges governance, strategy, and operations in her AI contributions. She is a board advisor of the Forbes School of Business and Technology and the AI Forum. She is passionate about modernizing innovation with disruptive technologies (SaaS/Cloud, Smart Apps, AI, IoT, Robots), with 13 books in the market and a 14th, The AI Split: A Perfect World or a Perfect Storm, to be released shortly. Follow her on LinkedIn, on Twitter, or on her Website. You can also access her at The AI Directory.