Presented By: Department of Statistics
Statistics Department Seminar Series: Josh Loftus, Assistant Professor, Statistics and Data Science, London School of Economics
"Causal interpretability for human-centered data science"
Abstract: Tools for interpretable machine learning or explainable artificial intelligence can be used to audit algorithms for fairness or other desired properties. In a "black-box" setting (one without access to the algorithm's internal structure), an auditor can only use model-agnostic methods based on varying inputs while observing differences in outputs. These include popular interpretability tools like Shapley values and Partial Dependence Plots. But such methods have important limitations that can impact audits, with consequences for outcomes such as fairness. In high-stakes applications, it may be worth the effort to use tools that can incorporate background information and be tailored for specific use cases. We introduce promising ways to do this using the mathematics of causality, with Causal Dependence Plots serving as an example. Causal interpretability illustrates a broader research agenda of a more human-centered data science, empowering people with tools to consciously guide the directions of research and technology for our purposes.
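As a concrete illustration of the model-agnostic auditing idea described above (a minimal sketch, not material from the talk): a Partial Dependence Plot is computed by varying one input over a grid while holding the other observed inputs fixed, and averaging the black-box model's outputs. The toy `black_box` function and all names here are illustrative assumptions.

```python
import numpy as np

def black_box(X):
    # Toy "black-box" model: the auditor only sees inputs and outputs,
    # never this internal structure.
    return 2.0 * X[:, 0] + X[:, 0] * X[:, 1]

def partial_dependence(model, X, feature, grid):
    """Average model output as one feature is set to each grid value,
    with the other features left at their observed values."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v  # vary one input column across the whole sample
        pd_values.append(model(X_mod).mean())
    return np.array(pd_values)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
grid = np.linspace(-2.0, 2.0, 9)
pd_curve = partial_dependence(black_box, X, feature=0, grid=grid)
```

Because this averages over the observed joint distribution of the other features, it can mislead when inputs are dependent or causally related, which is the kind of limitation that motivates causal alternatives such as Causal Dependence Plots.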
https://joshualoftus.com/