While completing her master’s degree at MIT, Joy Buolamwini embarked on an art project with hopes of building the Aspire Mirror, a device that would allow users to project digital masks onto their reflections. But when she began experimenting with the basic facial recognition software needed to program the machine, she discovered something rather shocking: it couldn’t detect her face.
Puzzled by this phenomenon, Buolamwini asked some friends and colleagues to try the software for themselves, and their faces were recognized every time. The graduate student couldn’t help but think it was her dark skin that kept the software from picking up on her face, so she reached for a white mask. Suddenly, the issue vanished, and her presence was immediately detected.
“I wondered if this was just my face or if there were deeper things at play,” Buolamwini recalls. The experience led her to a deeper investigation of skin type and gender bias in commercial artificial intelligence (AI), which eventually became her thesis at MIT. She found that the systems worked better on men’s faces than on women’s and better on lighter-skinned faces than on darker-skinned ones, but she also took it one step further by conducting an intersectional analysis.
“What I found was that the error rates were no more than 1% for lighter-skinned men but over 30% for darker-skinned women,” Buolamwini says. “So, doing the deeper dive made me realize that this wasn’t just my one-off experience but that there was a larger pattern, and it became an urgent problem because I started seeing how AI was being used in more and more areas of our lives.”
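For readers curious what such an intersectional analysis looks like in practice, here is a minimal, hypothetical Python sketch: rather than computing one error rate per gender or one per skin type, the evaluation is disaggregated by every combination of the two, which is what surfaces gaps like the one Buolamwini describes. The records, labels, and group names below are invented for illustration and are not drawn from her study.

```python
# A minimal, illustrative sketch of disaggregated (intersectional) error
# analysis. All records below are hypothetical; a real benchmark would use
# thousands of labeled faces.
from collections import defaultdict

# Each record: (classified_correctly, gender, skin_type) -- hypothetical labels.
results = [
    (True,  "male",   "lighter"),
    (True,  "male",   "darker"),
    (True,  "female", "lighter"),
    (False, "female", "darker"),
    (False, "female", "darker"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for correct, gender, skin in results:
    group = (gender, skin)  # the intersectional subgroup, not a single axis
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, n in sorted(totals.items()):
    print(f"{group[0]}/{group[1]}: error rate {errors[group] / n:.0%} (n={n})")
```

A single-axis analysis over the same records would average the subgroups together and hide exactly the disparity the disaggregated view exposes.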
Indeed, usage of facial recognition technology rose more than 270% between 2015 and 2019, and it has continued to infiltrate nearly every market in the years since, not least of which is beauty. For years now, AI has been quietly disrupting the industry, but what Buolamwini found was that the adoption of this software in beauty has put women of color at a dramatic disadvantage – and it goes far beyond the system’s inability to recognize their faces.
“We’ve had so much progress with civil rights, racial justice, disability justice, gender equality, so much we fought for, for so long,” she points out. “But you can then just put it behind a machine and have these biased outcomes that actually harm people’s day-to-day lives.”
Buolamwini and her contemporaries have conducted extensive research on search results, which routinely show racial bias against women and girls of color. “So, from the very beginning of trying to find out which beauty looks are trending or what’s in your skincare routine, you can encounter this kind of bias that shows a Eurocentric standard of beauty and therefore suggests that you’re not beautiful if you don’t look like this,” she explains.
And the seemingly never-ending filters littered across social media platforms can similarly encode racial bias. “Many of these slim your nose or lighten your skin and essentially don’t reflect the full spectrum of beauty,” Buolamwini says. “And part of that can be informed by the data that’s being used to train these systems in the first place.”
Her findings, and those of others, suggest that AI is more often than not a product of the people who create it. “Despite our aspirations for tech to be better than us, to be more objective than we are, the machines we create are a reflection of both our aspirations and our limitations,” the algorithmic justice expert notes. “So, we really have to look at who’s creating the technology that shapes society and whether those creators reflect society, and oftentimes, the answer is no.”
Reportedly, Black employees make up less than 2% of the technical workforce at major companies, and women make up less than a fifth, leaving much to be desired by way of diversity. “You have this situation where there’s a largely homogeneous group of people creating technology that’s supposed to work for all,” Buolamwini says.
It’s with this deep-seated problem in mind that Olay, long a proponent of fusing beauty and social impact, reached out to Buolamwini, who, since her MIT days, has founded the Algorithmic Justice League, given a viral TED Talk, and served as the subject of Netflix’s “Coded Bias” documentary. In its latest campaign, #DecodetheBias, the beauty and self-care brand seeks to raise awareness of the many biases hidden within AI and to put an end to them once and for all.
In addition to a national TV spot and print campaign, which features Buolamwini and her work, Olay is taking steps to diversify who codes and in turn creates the technologies at the center of this conversation. As an immediate action, it will send 1,000 girls to Black Girls CODE camp to inspire them to explore a career in STEM, and anyone who uses the #DecodetheBias hashtag on Instagram or Twitter can help send up to 200 more.
But as Buolamwini says, “It’s not just changing who’s in the room but also how we’re creating the technology. So, we need to diversify who’s in tech, but we also need to do internal checks to see how we’re doing and make changes as needed.” Olay’s new campaign, therefore, also includes an audit of the Olay Skin Advisor, a web-based tool that uses a selfie to analyze skin and recommend skincare products.
Conducted by ORCAA (O’Neil Risk Consulting & Algorithmic Auditing), a partner of the Algorithmic Justice League, the audit assessed the tool’s AI and identified points of bias. It then recommended steps for remediation, with Buolamwini herself offering her expertise throughout. “Like almost every other system, there was bias, namely age and skin type bias, and we then looked at ways these biases could be overcome,” she explains. “And in this particular case, having consented data of the groups the AI didn’t work as well on is one of the steps that can be taken.”
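To make that consented-data remediation step concrete, the hypothetical sketch below filters a candidate pool of images down to those from the underperforming groups whose owners explicitly opted in. Every field name and value is invented for illustration; this is not Olay’s or ORCAA’s actual code or data model.

```python
# A hypothetical sketch of consent-first data collection for remediation:
# only opted-in samples from the groups the audit flagged are added to the
# training pool. Field names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Sample:
    image_id: str
    age_band: str    # e.g. "60+"
    skin_type: str   # e.g. a Fitzpatrick-style category such as "V"
    consented: bool  # explicit opt-in recorded at collection time

def consented_subset(samples, underperforming_groups):
    """Return only opted-in samples from groups the model serves poorly."""
    return [
        s for s in samples
        if s.consented and (s.age_band, s.skin_type) in underperforming_groups
    ]

pool = [
    Sample("a1", "60+",   "V",  True),
    Sample("a2", "60+",   "V",  False),  # excluded: no consent
    Sample("a3", "18-29", "II", True),   # excluded: group already well served
]

# Suppose the audit flagged older users with darker skin types.
extra_training_data = consented_subset(pool, {("60+", "V")})
print([s.image_id for s in extra_training_data])  # -> ['a1']
```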
In fact, much of what drew the coded bias researcher to partner with Olay was the brand’s commitment to customer consent, so it was crucial that consent play a role in the audit and its recommended mitigation. “There are many reasons why you wouldn’t want your images scanned in the first place, from surveillance to data breaches that expose sensitive information to nefarious actors,” Buolamwini says. “So, it’s not just a question of can we do this kind of analysis or can we collect more data to improve results; instead, it’s a question of can we give agency to people to have a voice and a choice to say yes or no to whether their data is used in the first place.”
Olay will continue to conduct recurring internal audits, but it hopes to also set an example for others in the beauty industry to look at the algorithms behind their own AI applications and help usher in a wave of change. “Opportunities are being governed by algorithmic gatekeepers, and oftentimes, these gatekeepers are choking opportunity on the basis of race, skin type, and gender,” Buolamwini says. “So, I think that this campaign is extremely important not just for Olay and not just for the beauty industry but really any industry that uses AI and data.”
Originally posted on forbes.com by Gabby Shacknai.
Author’s Statement: I’m a New York-based journalist who covers beauty and wellness, food and travel, and lifestyle. My work has appeared in Fortune, ELLE, Departures, Air Mail, Travel + Leisure, and Women’s Health, among other outlets, and I have a master’s degree in journalism from Columbia University and a master’s degree in English from the University of Edinburgh. I have been lucky enough to travel across the world, meet the changemakers and rulebreakers of various industries, and get an inside look at the trends that define our era, and I aim to share that knowledge with my readers. Confronted by a growing influx of information and content, I know how challenging it can be to find voices you can trust in this day and age. I believe it’s more important than ever to produce reliable stories that are backed by my own experience and the expertise of my sources, and, whether writing about a new beauty movement or profiling a fitness-world disruptor, I strive to do just that.