Presented By: Law & Economics
Law & Economics: Algorithmic Risk Assessment in the Hands of Humans
Megan Stevenson, George Mason University Antonin Scalia Law School
Abstract:
Algorithmic prediction tools have proliferated in modern society. They promise improved decision-making but carry the threat of entrenching race, gender, or class biases. Little is known about how their use affects real-world outcomes. We evaluate the impacts of incorporating algorithmic predictions of future offending (risk assessments) into a high-stakes decision: criminal sentencing. Using multiple identification strategies (differences-in-differences, discontinuities-in-time, and discontinuities-in-risk-score), we seek to answer three questions: whether risk assessment affected judges' decision-making (lowering sentences for low-risk defendants relative to higher-risk defendants), whether it affected net outcomes such as incarceration rates, sentence lengths, and recidivism, and whether it affected racial disparities in incarceration. Our setting is Virginia, a state that adopted risk assessment with specific policy goals: lowering carceral sentences for nonviolent offenders and increasing sentences for sex offenders. Overall, we find that a) judges do change sentencing decisions in response to the risk assessment; b) this change did not lead to any discernible increase in efficiency, defined as lowering incarceration without affecting public safety or vice versa; c) the specific policy goals were not met (net incarceration rates for nonviolent offenders remained the same, and net incarceration rates for sex offenders decreased); and d) there is suggestive evidence that risk assessment can have an adverse effect on racial disparities in sentencing (among the subset of judicial circuits that responded most to the risk assessment, incarceration rates for black defendants rose by 8 percentage points relative to white defendants).
co-authored with Jennifer Doleac