Presented By: Department of Statistics
Statistics Department Seminar Series: Emily Diana, PhD Candidate, Department of Statistics and Data Science, Wharton School, University of Pennsylvania
"Addressing Algorithmic Bias and Disclosiveness: Minimax Group Fairness and Multiaccurate Proxies for Redacted Features"
Abstract: While data science enables rapid societal advancement, deferring decisions to machines does not automatically avoid egregious equity or privacy violations. Without safeguards in the scientific process --- from data collection to algorithm design to model deployment --- machine learning models can easily inherit or amplify existing biases and vulnerabilities present in society. My research focuses on explicitly encoding algorithms with ethical norms and constructing frameworks ensuring that statistics and machine learning methods are deployed in a socially responsible manner. In particular, I develop theoretically rigorous and empirically verified algorithms to mitigate automated bias and protect individual privacy.
I will highlight this work through two main contributions. In the first, I discuss a new oracle-efficient and convergent algorithm to provably achieve minimax group fairness -- fairness measured by worst-case outcomes across groups -- in general settings. In the second, I illustrate a framework for producing a sensitive attribute proxy that allows one to train a fair model even when the original sensitive features are redacted or unavailable.
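As a rough illustration (a sketch in my own notation, not quoted from the paper), the minimax group fairness objective in the first contribution can be written as

\[
\min_{h \in \mathcal{H}} \; \max_{g \in \mathcal{G}} \; \mathrm{err}_g(h),
\qquad \mathrm{err}_g(h) = \mathbb{E}\big[\,\ell(h(X), Y) \mid G = g\,\big],
\]

where \(\mathcal{H}\) is the model class, \(\mathcal{G}\) the collection of groups, and \(\ell\) a loss function: rather than equalizing error rates across groups, the goal is to make the largest group error as small as possible.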
Full-text versions of the two papers are available at https://dl.acm.org/doi/10.1145/3461702.3462523 (“Minimax Group Fairness: Algorithms and Experiments”) and https://dl.acm.org/doi/10.1145/3531146.3533180 (“Multiaccurate Proxies for Downstream Fairness”).