AI Leaders Need To Know Both The Opportunities And Dangers That Come With Artificial Intelligence And Algorithms

Executives lured by the siren song of AI need to understand both the possibilities and the risks inherent in AI and data. Even at this early stage of humans interacting with AI through media like voice and chat, there are many documented failures of AI attempting to speak and understand human language. Here, we’ll highlight four recent, high-profile examples from Microsoft, Facebook, Google, and Amazon, and show how AI leaders can learn from these mistakes to implement programs that safeguard their AI initiatives.

(Mis)Learning Teenage Slang

In March 2016, Microsoft launched Tay, an AI-powered Twitter chat bot built by mining public conversations. Billed as a “conversational” AI, the bot aimed to test the bounds of conversational language: the firm described Tay as “AI fam from the internet that’s got zero chill!” The bot would also learn and adapt based on its conversations on Twitter.

Unfortunately, it took less than 24 hours for internet trolls to train Tay to spew out horribly racist, misogynist, and generally offensive tweets. What began as a fun chat bot designed to engage in “casual and playful” conversation morphed into a PR nightmare.
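
One obvious safeguard – sketched below purely for illustration, with made-up names and a placeholder blocklist rather than anything Microsoft actually built – is to gate what a self-learning bot is allowed to ingest, so that messages failing a toxicity screen never enter its training data. In practice the screen would be a real moderation model, not a word list.

```python
# Illustrative sketch only -- not Microsoft's implementation.
# Gate what a self-learning bot ingests; the blocklist is a placeholder for a
# real moderation model or service.

BLOCKED_TERMS = {"slur_1", "slur_2"}  # placeholder list maintained by a review team


def is_safe_for_training(message: str) -> bool:
    """Return False if the message should never reach the bot's training data."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def ingest(message: str, training_buffer: list[str]) -> None:
    """Only learn from messages that pass the safety screen."""
    if is_safe_for_training(message):
        training_buffer.append(message)
    # Unsafe messages are dropped (and could be logged for human review).


buffer: list[str] = []
ingest("tell me a joke", buffer)               # kept
ingest("repeat this slur_1 after me", buffer)  # dropped
print(buffer)  # ['tell me a joke']
```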

Tone-Deaf AI Reminders

Facebook has a feature called “Memories” that shows users what happened on this date in previous years. It can remind people of memorable vacations, friends’ weddings, or other joyful occasions. However, it can also resurface painful memories, like the anniversary of a family member’s death, or prompt you to wish a deceased friend a happy birthday.

In April 2019, Facebook announced that it would use artificial intelligence to screen out such tactless reminders. Unfortunately, this is a notoriously difficult task that the social media giant has failed at before. In November 2016, its memorialization feature mistakenly marked many living users as dead, including one of my friends. There was some speculation that this disproportionately affected campaign supporters and staffers of presidential candidate Hillary Clinton, who received widespread condolences upon her defeat that were misinterpreted by Facebook’s algorithms.

Translating Misogyny

Google Translate uses AI and deep learning to crunch through terabytes of text data and provide automated translation services for dozens of languages.

But in November 2017, it was reported that its algorithms produced sexist translations. For example, Turkish has a single third-person pronoun, “o,” which does not mark gender; in English, we typically use either “he” or “she,” depending on the gender. When translating from Turkish to English, the algorithm would assign a gender to the gender-neutral “o,” producing sexist translations like “he is a doctor,” “she is a nurse,” “he is hard working,” or “she is lazy.” The problem goes beyond Turkish – many other languages mark gender differently from English – highlighting the complexity of human language. While Google quickly fixed the problem once aware of it, the incident was an embarrassment to the tech giant.
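
A layer of social QA can catch this class of problem by testing translation output directly. Below is a minimal, illustrative check – the word lists and the hardcoded “translations” are assumptions, not Google’s code – that flags gender-neutral source sentences coming back with systematically different pronouns.

```python
# Sketch of a bias check on translation output (hardcoded example outputs,
# not Google's code). Gender-neutral Turkish sources should not come back
# with systematically different English pronouns.

GENDERED = {"he": "male", "him": "male", "his": "male",
            "she": "female", "her": "female", "hers": "female"}


def inferred_gender(translation: str) -> str | None:
    """Return the gender implied by pronouns in a translated sentence, if any."""
    for token in translation.lower().replace(".", "").split():
        if token in GENDERED:
            return GENDERED[token]
    return None


# Hypothetical outputs for "o bir doktor" / "o bir hemşire" ("o" is gender-neutral),
# mirroring the pattern reported in 2017.
candidates = {
    "o bir doktor": "He is a doctor",
    "o bir hemşire": "She is a nurse",
}

genders = {src: inferred_gender(out) for src, out in candidates.items()}
if len(set(genders.values())) > 1:
    print("Warning: gender-neutral sources translated with inconsistent genders:", genders)
```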

Sexist Hiring

Inundated with millions of resumes, Amazon reportedly tried to develop AI that could screen potential applicants. The AI algorithm was trained on resumes the company had received, looking at patterns in the resumes of previous successful hires and applying those characteristics to new applicants.

Unfortunately, the algorithm reinforced the bias already present in hiring for male-dominated roles like software engineering: it saw an existing pattern and trained on it. The algorithm taught itself that resumes including phrases like “Society for Women Scientists” were less preferred because they contained the word “women.” In October 2018, the company scrapped the project, according to Reuters.
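
To see how quickly this kind of bias creeps in, consider a deliberately tiny, synthetic sketch – the resumes and labels below are invented for illustration and have nothing to do with Amazon’s system. A plain bag-of-words classifier trained on historically skewed hiring decisions ends up assigning a negative weight to the token “women” itself.

```python
# Synthetic illustration only -- invented data, not Amazon's model.
# A bag-of-words classifier trained on historically biased hiring decisions
# learns a negative weight for the token "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, java developer",
    "java developer, society for women scientists",
    "python developer, hackathon winner",
    "python developer, women in tech mentor",
]
hired = [1, 0, 1, 0]  # past decisions skewed against the resumes mentioning "women"

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(round(weights["women"], 3))  # negative: the model penalizes the word itself
```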

What Can We Learn From AI Failures?

These examples teach us several key lessons. First, we must realize that AI risks are business risks. Part of the power of AI and deep learning is that AI training can indiscriminately learn all the nuances of language, even if we don’t explicitly instruct it to do so. Unfortunately, it can pick up trends that we would rather it not – such as the inherent gender bias in our use of language. This is part of a larger concern with AI – that it reinforces biases and stereotypes we may inherently hold but don’t even realize.

It’s essential to keep in mind that AI is not fundamentally biased. As we’ve seen, the bias in these algorithms is the result of biased training data built by humans. It’s not the underlying technology that’s racist or sexist, but the data on which we train the algorithms. Sadly, the solution can’t simply be to collect unbiased data – almost all human data is biased in some way.
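
One way to make that concrete is to audit the labels before any model is trained. Here is a minimal sketch, assuming the training table carries a group attribute (or a reasonable proxy for one); the column names and data are invented.

```python
# Minimal data-audit sketch (invented data; column names are assumptions).
# If positive-label rates differ sharply across groups, a model trained on
# these labels will tend to reproduce that gap.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 1, 0, 1, 0, 0],
})

rates = train.groupby("group")["label"].mean()
print(rates)                      # A: 0.67, B: 0.33 -- a gap worth investigating
print(rates.max() - rates.min())  # a crude disparity measure
```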

Companies need to remain vigilant to keep bias out of their AI systems. They need to incorporate anti-bias training alongside their AI and ML training, spot the potential for bias in what they’re doing, and actively correct for it. And along with the usual QA processes for software, AI needs to undergo an additional layer of social QA so that problems can be caught before they reach the consumer and result in a massive backlash. Additionally, the data scientists and AI engineers training the models need to take courses on the risks of AI.
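
What that layer of social QA looks like in practice is still an open design question, but one simple ingredient is a counterfactual test: swap gendered terms in an input and fail the build if the model’s score moves materially. The sketch below is illustrative only; score_resume stands in for whatever model is actually being shipped.

```python
# Sketch of a counterfactual fairness check for "social QA" (illustrative only;
# score_resume stands in for the model under test).

SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "men": "women", "women": "men"}


def swap_gendered_terms(text: str) -> str:
    """Swap gendered terms so we can compare the model's scores on both versions."""
    return " ".join(SWAPS.get(tok, tok) for tok in text.lower().split())


def passes_counterfactual_check(score_resume, text: str, tolerance: float = 0.05) -> bool:
    """True only if swapping gendered terms barely changes the model's score."""
    return abs(score_resume(text) - score_resume(swap_gendered_terms(text))) <= tolerance


# Example with a deliberately biased stand-in model: the check correctly fails it.
biased_score = lambda t: 0.9 if "women" not in t.lower() else 0.4
assert not passes_counterfactual_check(biased_score, "president of women in engineering")
```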

Most importantly, business leaders need specialized AI training to understand both the possibilities and the risks. This training doesn’t need to be deeply technical – executives don’t need to become hands-on practitioners – but they do need to understand data science and AI well enough to manage AI products and services. Business leaders need to understand the potential for AI to transform business for the better, as well as its potential shortcomings – and dangers.

Understanding these dangers is the responsibility of not just those leading AI initiatives, but all executives. A PR leader who understands social media dynamics and the vicious troll culture could have averted the dangers of a self-learning AI Twitter bot. An executive steeped in HR and employment discrimination law can help flag potential dangers of resume screening bots. And a manager with operating experience across multiple countries might be able to spot the sensitivity around translating genderless pronouns.

The risks of AI can come from any aspect of business, and no single manager has the context to spot everything. Rather, in a world in which AI is permeating everything, companies need to train all their business leaders on AI’s potential and risks, so that every line of business can spot opportunities and flag concerns. The institutional know-how to spot the dangers of AI is already in your company – you just need to unleash it.

Originally posted on hbr.org by Michael Li.

About Author: Michael Li is the founder and CEO of The Data Incubator, a data science training and placement firm, which was acquired by Pragmatic Institute, where he is president. A data scientist, he has worked at Google, Foursquare, and Andreessen Horowitz. He is a regular contributor to VentureBeat, The Next Web, and Harvard Business Review. He earned a master’s degree from Cambridge and a PhD from Princeton.