Data Biases and Algorithmic Fairness
Visiting Speaker
Past Event
Kristina Lerman
USC/ISI
Friday
Dec 13, 2019
Watch video
11:00 am
177 Huntington Ave 2nd floor conference room
11th floor

Social data is often generated by heterogeneous subgroups, each with its own traits and behaviors. Correlations between these traits and behaviors, and even the way the data is collected, can create subtle biases. Models trained on biased data will make invalid inferences about individuals, a problem known as the ecological fallacy. Such inferences can also be unfair, discriminating against individuals based on their membership in protected groups. I describe common sources of bias in heterogeneous data, including Simpson's paradox, survivorship bias, and the longitudinal data fallacy. I then present a mathematical framework for de-biasing data that addresses these threats to the validity of predictive models. The framework creates covariates that do not depend on sensitive features, such as gender or race, and can be used with any model to produce fairer, unbiased predictions. It promises to learn unbiased models even in analytically challenging data environments.
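To make the abstract's two key ideas concrete, the sketch below first constructs a toy dataset exhibiting Simpson's paradox (each subgroup shows a positive trend, but pooling the subgroups reverses it), then removes the subgroup confound by residualization, i.e., regressing out group membership before fitting. Residualization is only one standard way to build covariates that do not depend on a group variable; the speaker's actual framework is not specified in the abstract, so treat this as an illustrative assumption, not her method.

```python
import numpy as np

# Toy data: two subgroups, each with a POSITIVE within-group trend (slope +1),
# but offset so that pooling the groups reverses the apparent trend.
x = np.array([0., 1., 2., 4., 5., 6.])
y = np.array([5., 6., 7., -1., 0., 1.])  # y = x + 5 in group 0, y = x - 5 in group 1
group = np.array([0, 0, 0, 1, 1, 1])     # subgroup membership

def slope(u, v):
    """Least-squares slope of v regressed on u."""
    u_c, v_c = u - u.mean(), v - v.mean()
    return (u_c @ v_c) / (u_c @ u_c)

# Pooled regression: the aggregate trend is NEGATIVE -- Simpson's paradox.
print(f"pooled slope:    {slope(x, y):+.2f}")    # -> -1.14

# Per-group regressions: each subgroup's trend is POSITIVE.
for g in (0, 1):
    print(f"group {g} slope:   {slope(x[group == g], y[group == g]):+.2f}")  # -> +1.00

# De-biasing by residualization (one simple way to construct covariates that
# do not depend on the group variable): subtract each group's mean, then
# regress the residuals. By the Frisch-Waugh-Lovell theorem this recovers
# the within-group slope, removing the confounding effect of membership.
means_x = np.array([x[group == g].mean() for g in (0, 1)])
means_y = np.array([y[group == g].mean() for g in (0, 1)])
rx = x - means_x[group]
ry = y - means_y[group]
print(f"de-biased slope: {slope(rx, ry):+.2f}")  # -> +1.00
```

Replacing `group` with a sensitive feature such as gender or race gives the flavor of the talk's framework: the residualized covariates `rx` carry no information about group membership, so any downstream model fit on them cannot reproduce the spurious pooled trend.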

About the speaker
Kristina Lerman is a Principal Scientist at the University of Southern California's Information Sciences Institute and holds a joint appointment as a Research Associate Professor in the USC Computer Science Department. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing and the analysis of social networks and social media. Her recent work on modeling and understanding cognitive biases in social networks has been covered by the Washington Post, the Wall Street Journal, and MIT Technology Review.
