BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//UM//UM*Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Detroit
TZURL:http://tzurl.org/zoneinfo/America/Detroit
X-LIC-LOCATION:America/Detroit
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260219T150451Z
DTSTART;TZID=America/Detroit:20260313T100000
DTEND;TZID=America/Detroit:20260313T110000
SUMMARY:Workshop / Seminar: Statistics Department Seminar Series: Maggie Makar\, Assistant Professor\, Computer Science and Engineering\, University of Michigan
DESCRIPTION:Abstract: Machine learning models are often deployed in settings where typical assumptions fail: agents strategically manipulate inputs\, distributions shift\, and sequential decisions are prohibitively high-dimensional. I argue that causal structure provides a principled way to address these challenges. By viewing causal assumptions as structural constraints that restrict the space of plausible data-generating processes\, we can leverage them to obtain more robust and efficient estimators.\nFirst\, I will show how causal reasoning can be used to detect strategic misreporting and gaming in predictive models. The key insight is that\, unlike genuine behavioral adaptation\, misreporting does not causally influence downstream variables. By leveraging this asymmetry\, we obtain identification strategies that distinguish manipulation from legitimate change.\nSecond\, I will demonstrate how exploiting causal structure in reinforcement learning can reduce effective dimensionality and improve statistical efficiency. Structural assumptions induce conditional independencies that constrain the data-generating process\, enabling more stable estimation and sharper sample complexity guarantees.\nFinally\, I will introduce minimally orthogonal causal inference. While classical orthogonalization removes first-order sensitivity to nuisance estimation\, we show that weaker\, targeted orthogonality conditions are often sufficient for valid inference. This perspective leads to simpler estimators and improved finite-sample behavior without sacrificing asymptotic guarantees.
UID:145746-21897773@events.umich.edu
URL:https://events.umich.edu/event/145746
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:seminar
LOCATION:West Hall - 340
END:VEVENT
END:VCALENDAR