Presented By: Department of Statistics Dissertation Defenses

Principled Evaluation of Large Language Models: A Statistical Perspective

Felipe Maia Polo

The rapid progress of large language models has outpaced the development of principled methodologies for their evaluation. This dissertation draws on ideas from psychometrics and statistics to build rigorous, efficient, and interpretable evaluation frameworks for modern AI systems. In this talk, I focus on three contributions that address complementary challenges in LLM evaluation.

First, I present PromptEval, a method that confronts the problem of prompt sensitivity — the phenomenon whereby minor rephrasing of benchmark questions can substantially alter measured model performance. By combining Item Response Theory with matrix completion, PromptEval efficiently approximates the full distribution of model performance across hundreds of prompt variations while requiring less than 5% of the total evaluations, replacing arbitrary single-prompt assessments with statistically robust characterizations of model behavior.
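To picture the estimation idea, here is a minimal sketch in Python. It assumes a Rasch-style item response model fit by plain gradient ascent on a small observed fraction of a prompt-template by question correctness matrix; the simulated data, parameter names, and fitting routine are all illustrative assumptions rather than the actual PromptEval implementation. The fitted model then reconstructs the distribution of per-template accuracies from roughly 5% of the cells.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 100 prompt templates x 500 benchmark questions.
    n_templates, n_questions = 100, 500
    theta_true = rng.normal(0.0, 1.0, n_templates)   # template "ability"
    beta_true = rng.normal(0.0, 1.0, n_questions)    # question difficulty

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Full (unobserved) correctness matrix under a Rasch model.
    P = sigmoid(theta_true[:, None] - beta_true[None, :])
    Y = rng.binomial(1, P).astype(float)

    # Observe only ~5% of the (template, question) cells.
    mask = rng.random((n_templates, n_questions)) < 0.05

    # Fit theta, beta by gradient ascent on the observed log-likelihood.
    theta = np.zeros(n_templates)
    beta = np.zeros(n_questions)
    lr = 0.5
    for _ in range(500):
        resid = mask * (Y - sigmoid(theta[:, None] - beta[None, :]))
        theta += lr * resid.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
        beta -= lr * resid.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)

    # Reconstruct per-template accuracy and its distribution across templates.
    acc_hat = sigmoid(theta[:, None] - beta[None, :]).mean(axis=1)
    acc_true = P.mean(axis=1)
    print("mean abs error of per-template accuracy:",
          np.abs(acc_hat - acc_true).mean().round(3))
    print("estimated accuracy quantiles (5%, 50%, 95%):",
          np.quantile(acc_hat, [0.05, 0.5, 0.95]).round(3))

The key point the sketch conveys is that the full performance distribution over templates can be summarized without ever scoring most (template, question) pairs directly.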

Second, I introduce skill-based scaling laws that model LLM performance through latent capabilities such as reasoning and instruction-following. Inspired by factor analysis, this approach exploits the correlation structure among benchmark tasks to produce scaling predictions that are both more accurate and more interpretable than existing laws, which typically focus on aggregate validation loss and fail to generalize across model families.
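The structural idea behind such laws can be sketched as a toy model, shown below. The functional forms, skill names, loadings, and compute units are assumptions for illustration, not the dissertation's fitted model: each latent skill improves with compute according to its own saturating power law, and each benchmark's accuracy is a logistic function of a weighted combination of skills, so correlated benchmarks share skill loadings.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Assumed latent skills (illustrative): reasoning, instruction-following.
    # Each skill follows its own saturating power law in training compute C.
    def skills(C, a=(3.0, 2.0), b=(40.0, 15.0), alpha=(0.35, 0.25)):
        C = np.asarray(C, dtype=float)
        return np.stack([a_k - b_k * C ** (-al_k)
                         for a_k, b_k, al_k in zip(a, b, alpha)], axis=-1)

    # Hypothetical loadings: how much each benchmark draws on each skill.
    loadings = np.array([
        [1.2, 0.1],   # math-style benchmark: mostly reasoning
        [0.2, 1.0],   # chat-style benchmark: mostly instruction-following
        [0.7, 0.6],   # mixed benchmark
    ])
    intercepts = np.array([-2.0, -1.0, -1.5])

    def benchmark_accuracy(C):
        """Predicted accuracy on each benchmark at compute C (arbitrary units)."""
        s = skills(C)                        # latent skill values at compute C
        return sigmoid(s @ loadings.T + intercepts)

    for C in [1e2, 1e3, 1e4]:
        print(f"C={C:>7.0f}  predicted accuracies: {benchmark_accuracy(C).round(3)}")

Because all benchmarks are driven by a small number of shared skills, fitting the loadings on observed models constrains the predictions for unseen compute scales more tightly than fitting each benchmark's curve in isolation.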

Third, I present Bridge, a unified statistical framework that explicitly connects LLM-as-a-Judge evaluations to human assessments. Bridge models the systematic discrepancies between human and LLM judgments through a latent preference score and a linear transformation of divergence-capturing covariates, enabling principled recalibration of automated scores and formal statistical testing for human–LLM gaps.
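As an illustration of the kind of recalibration and testing this enables, here is a minimal sketch under assumed covariates and a plain linear model; it is not the Bridge implementation itself. Human scores are regressed on the LLM judge score plus divergence-capturing covariates, the fitted map recalibrates automated scores, and t-tests indicate whether the divergence terms are significantly nonzero.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 400

    # Simulated evaluation data (illustrative): an LLM-judge score, plus
    # covariates meant to capture systematic human/LLM-judge divergences.
    judge_score = rng.normal(0.0, 1.0, n)
    length_gap = rng.normal(0.0, 1.0, n)    # e.g. verbosity gap between responses
    self_pref = rng.binomial(1, 0.3, n)     # e.g. judge scoring its own model family

    # Human scores: agree with the judge, but with a systematic length bias.
    human_score = 0.9 * judge_score + 0.4 * length_gap + rng.normal(0.0, 0.5, n)

    # OLS fit of human score on judge score + divergence covariates.
    X = np.column_stack([np.ones(n), judge_score, length_gap, self_pref])
    coef, *_ = np.linalg.lstsq(X, human_score, rcond=None)
    resid = human_score - X @ coef
    dof = n - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t_stats = coef / se
    p_vals = 2 * stats.t.sf(np.abs(t_stats), dof)

    names = ["intercept", "judge_score", "length_gap", "self_pref"]
    for name, c, p in zip(names, coef, p_vals):
        print(f"{name:>12}: coef={c:+.3f}  p={p:.3g}")

    # Recalibrated score: what the fitted model predicts a human would have given.
    recalibrated = X @ coef

A significant coefficient on a divergence covariate (here, length_gap) is evidence of a systematic human versus LLM-judge gap; the recalibrated score corrects for it rather than reporting the raw judge output.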

Together, these contributions advance a vision of AI evaluation as a scientific discipline in its own right — one that demands the same statistical care we expect from the systems being evaluated.
