Practices Of Ethical AI – Part 1: AI Failures And Lessons Learned

Earlier this year, our new book, The AI Dilemma, was released on Amazon. It discusses the impacts of AI on a range of industries: financial services, government, healthcare, media and technology, manufacturing, and retail, to name just a few.

As we review the AI ethics landscape, there have been many developments. Some reinforce that we do not yet have sufficient guardrails in place, while others on the ethical AI policy front give us increased confidence that AI risks are being seriously considered.

This blog reviews aspects of AI ethics through a Perfect Storm lens and highlights lessons learned from AI failures. The second blog discusses positive AI developments from recent progressive policy and legislation advancing ethical AI practices.

A Perfect Storm: AI Failures and Lessons Learned

Microsoft's Bot Tay

The Tay debacle, in which the bot produced racist remarks within 24 hours of interacting with people, is an important lesson on the continued risks of AI bots that rely on ML generalizations from large amounts of data. Microsoft trained Tay's algorithm on public data along with material provided by professional comedians to increase the bot's language literacy. The plan was to enable Tay to identify patterns through its interactions and sound more natural. However, when Microsoft released Tay to the public on Twitter, it at first only bantered and told lame jokes. After a short window, Tay started tweeting offensive things like: “I f@#%&*# hate feminists and they should all die and burn in hell” or “Bush did 9/11 and Hitler would have done a better job…” Within 16 hours of release, Tay had tweeted more than 95,000 times, and most of the messages were offensive. Twitter users registered major complaints and outrage, and Microsoft had little choice but to bury Tay. A hard lesson for Microsoft, and a foreshadowing of bots that create more harm than good.

Leadership Action: Improve release controls and deploy more extensive testing and QA controls. Think through your worst-case scenarios and edge cases, and plan to mitigate them, as they could become your reality in less than 24 hours.

Amazon AI Recruitment Tool Gone Awry

Amazon built a recruiting app that unfortunately did not rate candidates for software developer and other technical posts in a gender-neutral way. This was because the data set of resumes, collected over a ten-year period, came predominantly from men in the tech sector. Hence the AI model learned that male candidates were the better choice and downgraded women's applications. The results were so skewed that Amazon disbanded the project.

Market Impact: Fifty-five percent of U.S. human resources managers reported that AI is already a regular part of HR recruiting (Source: CareerBuilder), and the artificial intelligence in recruitment market was valued at $580 million in 2019, estimated to grow at a CAGR of 6.76% during 2020-2025. The growing demand for predictive analytics as an important part of the recruitment process is driving the adoption of artificial intelligence technology. However, AI data bias against women continues to be an area where board directors and CEOs must be far more vigilant to avoid creating more perfect storms.

Leadership Action: Dig deep into the logic of the data set design and have third-party experts validate the risks of recruiting AI vendors. Leverage third-party data bias toolkits (e.g., Microsoft Fairlearn, IBM AI Fairness 360, FairML, Google, REVISE, etc.), along the lines of the sketch below.
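
For illustration only, here is a minimal sketch of the kind of check such toolkits enable, using Fairlearn's MetricFrame to compare accuracy and selection rates across gender for a hypothetical resume-screening model. All of the labels and data below are made-up placeholders, not Amazon's system.

```python
# Minimal bias check with Fairlearn: compare how often a hypothetical
# resume-screening model advances candidates, split by gender.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Illustrative placeholder data only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # whether the candidate was actually a good hire
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # model's advance (1) / reject (0) decision
gender = ["F", "F", "M", "F", "M", "M", "M", "F"]   # sensitive attribute

frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)

print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # largest gap between groups; a large gap flags possible bias
```

A large selection-rate gap between groups is not proof of discrimination on its own, but it is exactly the kind of signal that should trigger a deeper third-party review before a recruiting model goes anywhere near production.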

US Court COMPAS Algorithm Bias

The COMPAS algorithm attributed a higher risk of recidivism to Black defendants than to white defendants. COMPAS stands for Correctional Offender Management Profiling for Alternative Sanctions; it takes into account factors such as employment, age, and arrests, and produces a risk score. Unfortunately, Black defendants were incorrectly labeled as “high-risk” to commit a future crime twice as often as their white counterparts. The use of AI algorithms in predictive policing is perpetuating harsher sentencing of Black communities, and such tools are still used in 46 US states. In this use case, AI is doing more harm than good, accelerating old world views and generating more perfect storms rather than advancing judicial reform. More Case Info Here.

Leadership Action: Implement stronger data set design monitoring and control systems for data bias, and commission third-party audits before production rollouts. A minimal disparity check of the kind such an audit would run is sketched below.
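
As a hedged illustration of what a pre-production audit might examine, the sketch below computes false positive rates by group, the disparity metric at the centre of the public COMPAS findings. The tiny data frame is entirely hypothetical and stands in for real case records.

```python
# Sketch of a disparity audit: compare false positive rates (scored "high-risk"
# but did not reoffend) across groups. Hypothetical data for illustration only.
import pandas as pd

audit = pd.DataFrame({
    "group":     ["Black", "Black", "Black", "Black", "White", "White", "White", "White"],
    "predicted": [1, 1, 0, 1, 0, 1, 1, 0],   # 1 = scored high-risk by the model
    "actual":    [0, 1, 0, 0, 0, 1, 0, 0],   # 1 = reoffended within the follow-up window
})

def false_positive_rate(df: pd.DataFrame) -> float:
    negatives = df[df["actual"] == 0]              # people who did not reoffend
    return (negatives["predicted"] == 1).mean()    # share of them scored high-risk anyway

fpr_by_group = audit.groupby("group").apply(false_positive_rate)
print(fpr_by_group)
print("FPR ratio:", fpr_by_group.max() / fpr_by_group.min())
```

If the ratio between groups approaches the roughly two-to-one gap reported for COMPAS, the system should not move forward without redesign of the data set and independent review.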

Conclusion:
The next blog in this two-part series will discuss positive developments in AI and ethics that are creating a better, or more perfect, world.

Originally posted on forbes.com by Cindy Gordon.

About Author: Dr. Cindy Gordon is a CEO, thought leader, author, keynote speaker, board director, and advisor to companies and governments striving to modernize their business operations with advanced AI methods. She is the CEO and Founder of SalesChoice, an AI SaaS company focused on ending revenue uncertainty and bringing more humanity to sales, using AI and cognitive sciences to avoid attention deficit disorder. A former Accenture, Xerox and Citicorp executive, she bridges governance, strategy and operations in her AI initiatives. She is also a board advisor of the Forbes School of Business and Technology, and the AI Forum. She is passionate about modernizing innovation with disruptive technologies (SaaS/Cloud, Smart Apps, AI, IoT, Robots and Cobots), and has 14 books in the market, with The AI Dilemma just released.