Implications Of The Biases That Have Shaped Society In The Past (Or Shape It Today), On How AI Systems Work – Or Fail To Work Followed By Q&A

The datasets on which AI technologies are trained to carry out a task reflect society, and can contain the biases that were embedded in processes, relationships, or structures at the point of data collection. When this data is used to develop AI, the resulting systems reflect back the social and cultural structures or practices of the past; this means the biases that have shaped society in the past (or shape it today) can form the basis of predictions or recommendations about future action.

This has implications for how AI systems work – or fail to work – for different communities or users. Examples of such failures in recent years have included women being less likely to be shown adverts for highly paid jobs, companies not making deliveries to poorer neighbourhoods, and racial disparities in approaches to policing or treatment by the justice system.

New research is developing ways of managing bias in data, for example by removing sensitive information – sometimes referred to as ‘scrubbing to neutral’ – before that data is used to develop AI systems. However, many current attempts to remove bias from AI are narrow in scope, and they remain difficult to apply in the areas where fairness matters most, which are typically the most complex policy areas.
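For illustration only, the short Python sketch below shows one simple form that ‘scrubbing to neutral’ could take: dropping columns that record sensitive attributes before a dataset is used for training. The dataset, column names, and the scrub_to_neutral helper are hypothetical, and the example deliberately exposes the approach's main weakness: proxy variables (here, postcode) can still encode the attributes that were removed.

```python
# A minimal, hypothetical sketch of 'scrubbing to neutral': removing
# sensitive columns from a dataset before model development.
# Note: proxy variables such as postcode may still correlate with the
# removed attributes, so this step alone does not guarantee fairness.
import pandas as pd

# Hypothetical applicant records (illustrative values only).
applicants = pd.DataFrame({
    "age": [34, 29, 51],
    "postcode": ["SW1A", "M1", "EH1"],
    "gender": ["F", "M", "F"],
    "ethnicity": ["A", "B", "A"],
    "years_experience": [8, 4, 20],
    "hired": [1, 0, 1],
})

SENSITIVE_COLUMNS = ["gender", "ethnicity"]

def scrub_to_neutral(df: pd.DataFrame, sensitive: list[str]) -> pd.DataFrame:
    """Return a copy of the dataset with sensitive columns removed."""
    return df.drop(columns=[c for c in sensitive if c in df.columns])

training_data = scrub_to_neutral(applicants, SENSITIVE_COLUMNS)
print(training_data.columns.tolist())
# ['age', 'postcode', 'years_experience', 'hired']
```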

Questions about how to build fair algorithms are attracting increasing interest in technical communities, and ideas about technical ‘fixes’ for fairness are evolving, but fairness remains a challenging issue. Notions of fairness can relate to groups (whether different social groups are treated equally or experience similar outcomes) or to individuals (expectations about personal outcomes). People think about fairness in different ways, drawing on ideas about equality of treatment or opportunity, or perceptions of what is just or right.
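As a concrete illustration of one group-based notion of fairness, the sketch below compares the rate of favourable decisions between two groups, a check often called demographic parity. The data, group labels, and threshold are hypothetical, and this is only one of several competing fairness definitions, which can conflict with one another and with individual-level notions of fairness.

```python
# A minimal, hypothetical sketch of a group-based fairness check
# (demographic parity): compare favourable-decision rates across groups.
from collections import defaultdict

# (group, decision) pairs: decision 1 = favourable outcome (e.g. loan approved).
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

def positive_rates(records):
    """Return the share of favourable decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(decisions)
print(rates)

# Demographic parity asks these rates to be (approximately) equal.
# Individual fairness instead asks that similar individuals receive
# similar decisions, a different and sometimes conflicting requirement.
disparity = max(rates.values()) - min(rates.values())
print(f"Rate gap between groups: {disparity:.2f}")
```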

Technical fixes alone cannot answer these bigger questions about what fairness is or the type of social outcomes societies want AI systems to help create. They require an understanding of the broader forces that influence how and where AI systems are put to use, and the social forces that shape who designs AI systems and for whose benefit.

Thinking about the politics of AI in this way becomes especially important when examining the broader social and economic ramifications of these technologies, such as the impact of AI on work.

About Speaker: Cynthia Dwork is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. A cornerstone of this work is differential privacy, a strong privacy guarantee, frequently permitting highly accurate data analysis. Dr. Dwork has also made seminal contributions in cryptography and distributed computing, and is the recipient of two “test-of-time” awards. She is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the American Philosophical Society, and a Fellow of the American Academy of Arts and Sciences. Dwork, previously Distinguished Scientist at Microsoft Research, joined Harvard in January 2017 as the Gordon McKay Professor of Computer Science at the Paulson School of Engineering and Applied Sciences, Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study, and an Affiliated Faculty Member at Harvard Law School.

About Series: You and AI – conversations about AI technologies and their implications for society. Artificial Intelligence (AI) is the science of making computer systems smart, and an umbrella term for a range of technologies that carry out functions that typically require intelligence in humans. AI technologies already support many everyday products and services, and the power and reach of these technologies are advancing at pace.

The Royal Society is working to support an environment of careful stewardship of AI technologies, so that their benefits can be brought into being safely and rapidly, and shared across society. In support of this aim, the Society’s You and AI series brought together leading AI researchers to contribute to a public conversation about advances in AI and their implications for society.
