Presented By: The Center for the Study of Complex Systems
CSCS Seminar: Can Governance be Reconciled with Uncertainty in Machine Learning? Challenges and Opportunities concerning Accountability and Variance
A. Feder Cooper, Ph.D. Candidate in Computer Science, Cornell University
This is a THURSDAY seminar
Abstract: Artificial intelligence (AI) and machine learning (ML) researchers are confronted daily with the reality that our field has become a stand-in in popular discourse for a variety of public anxieties, political debates, and metaphysical questions about human nature and intelligence. Among such weighty topics, it can be easy to neglect the importance of low-level engineering decisions and infrastructure in AI/ML technology — the realities of implementing algorithms in code, deploying systems at scale, reckoning with computational resource constraints, and numerous other empirical concerns that complicate theory (both statistical and legal) in practice.
This talk will explore how variance introduces arbitrariness into AI/ML, which in turn complicates system reliability and concrete, actionable notions of accountability. While the details of variance may seem mundane in comparison to debates about the essence of intelligence, they are in fact responsible for powering the technology — intelligent or not — that is reshaping the contours of fundamental rights and institutions. This talk will clarify these connections by examining how variance is central to the function of AI/ML systems, and moreover, is inextricable from how these systems reproduce existing harms, such as racial discrimination, and bring about emergent behaviors that create novel problems for due process in the law.
Bio:
A. Feder Cooper is a Ph.D. candidate in Computer Science at Cornell University and Rising Star in EECS (MIT, 2021), working at the interface of uncertainty, reliability, accountability, and ethics in computing. Cooper researches empirically motivated, theoretically grounded problems in Bayesian inference, model selection, and deep learning, and has published numerous papers at top AI/ML conferences (e.g., NeurIPS and AISTATS). In bringing this work to bear on tech policy and ethics, Cooper engages methods from the law and social sciences, and has had work featured in interdisciplinary computing venues (e.g., FAccT) and tech law journals (e.g., Colorado Tech Law Journal). Much of this work has been recognized with spotlight and contributed talk awards.