Talks

Data Biases and Algorithmic Fairness

Visiting speaker
Past Talk
Kristina Lerman
USC/ISI
Dec 13, 2019
11:00 am
In-person


Social data is often generated by heterogeneous subgroups, each with its own traits and behaviors. Correlations among those traits, behaviors, and even the way the data is collected can create subtle biases. Models trained on biased data will make invalid inferences about individuals, a problem known as the ecological fallacy; the inferences can also be unfair, discriminating against individuals based on their membership in protected groups. I describe common sources of bias in heterogeneous data, including Simpson's paradox, survivorship bias, and the longitudinal data fallacy. I then describe a mathematical framework for de-biasing data that addresses these threats to the validity of predictive models. The framework creates covariates that do not depend on sensitive features, such as gender or race, and can be used with any model to make fairer, unbiased predictions. The framework promises to learn unbiased models even in analytically challenging data environments.
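The two ideas in the abstract can be sketched numerically. The snippet below first constructs hypothetical data from two subgroups whose within-group trend is positive while the pooled trend is negative (Simpson's paradox), then applies a generic linear residualization: regressing a covariate on the sensitive feature and keeping the residual, which is linearly uncorrelated with group membership. The data, the residualization step, and all variable names are illustrative assumptions, not the speaker's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Two hypothetical subgroups with the same positive within-group trend
# but different baselines -- the classic Simpson's paradox setup.
g = rng.integers(0, 2, size=n)                  # subgroup label (sensitive feature)
x = rng.normal(loc=3.0 * g, scale=1.0)          # trait, shifted by subgroup
y = x - 6.0 * g + rng.normal(scale=0.5, size=n) # outcome, trend +1 within each group

print(slope(x[g == 0], y[g == 0]))  # within-group trend: positive
print(slope(x[g == 1], y[g == 1]))  # within-group trend: positive
print(slope(x, y))                  # pooled trend: negative (sign reversed)

# De-biasing in the spirit of the talk (a generic sketch, not the speaker's
# exact method): regress the covariate on the sensitive feature and keep
# the residual, a covariate that carries no linear group signal.
A = np.column_stack([np.ones(n), g])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
x_fair = x - A @ coef
print(abs(np.corrcoef(g, x_fair)[0, 1]))  # near zero: decorrelated from g
```

A model fit on `x_fair` instead of `x` cannot reconstruct group membership from this covariate, which is the intuition behind building predictors from sensitive-feature-independent inputs.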

About the speaker
Kristina Lerman is a Principal Scientist at the University of Southern California Information Sciences Institute and holds a joint appointment as a Research Associate Professor in the USC Computer Science Department. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing and social network and social media analysis. Her recent work on modeling and understanding cognitive biases in social networks has been covered by the Washington Post, the Wall Street Journal, and MIT Technology Review.