Sharad Goel
This talk will be hybrid, with both in-person and remote attendance.
Machine learning algorithms are now used to automate routine tasks and to guide high-stakes decisions, but, if not carefully designed, they can exacerbate inequities. I'll start by describing an evaluation of automated speech recognition (ASR) tools, which power popular virtual assistants, facilitate automated closed captioning, and enable digital dictation platforms for health care. We find that five state-of-the-art ASR systems -- developed by Amazon, Apple, Google, IBM, and Microsoft -- exhibit substantial racial disparities, making twice as many errors for Black speakers as for white speakers, a gap we trace to a lack of diversity in the audio data used to train the models. I'll then describe recent attempts to mathematically formalize fairness. I'll argue that some of the most popular definitions, when used as design principles, can, perversely, harm the very groups they were created to protect. I'll conclude by describing a general, consequentialist paradigm for designing equitable algorithms that aims to mitigate the limitations of the dominant approaches to building fair machine learning systems.
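As a rough illustration of the kind of comparison the abstract describes (not the speakers' actual evaluation pipeline), the sketch below computes an average word error rate (WER) per speaker group from hypothetical transcripts; the group labels, example sentences, and the simple WER implementation are all assumptions made for illustration.

```python
# Hypothetical sketch: comparing ASR word error rates across speaker groups.
# The data and the WER implementation are illustrative assumptions only.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical (group, human transcript, ASR transcript) triples.
samples = [
    ("group_a", "we went to the store yesterday", "we went to the store yesterday"),
    ("group_a", "she is reading a book", "she is reading the book"),
    ("group_b", "he finished the report last night", "he finish the report last nite"),
    ("group_b", "they are driving to work", "there driving to work"),
]

# Average WER per group; a large gap between groups signals a disparity.
per_group = defaultdict(list)
for group, reference, hypothesis in samples:
    per_group[group].append(word_error_rate(reference, hypothesis))

for group, rates in per_group.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```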