Black-box machine learning models are increasingly deployed in safety-critical domains, such as autonomous vehicles and healthcare. This trend heightens the need for reliable uncertainty quantification. Traditional methods for such estimation, however, require distributional assumptions that are incompatible with modern black-box estimators. In their place, post-hoc, distribution-free methods of uncertainty quantification have arisen; among these is "conformal prediction." At its core, conformal prediction quantifies uncertainty by replacing model point predictions with "prediction regions," subsets of the output space whose shape and size are chosen to guarantee coverage of the truth with a user-specified probability.
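
To make the core construction concrete, here is a minimal sketch of split conformal prediction for regression. It assumes a hypothetical fitted model object with a predict method and uses the absolute residual as the conformity score; it illustrates the general recipe, not the thesis's exact construction.

import numpy as np

def conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Prediction intervals with >= 1 - alpha marginal coverage."""
    # Conformity scores on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    preds = model.predict(X_test)
    # The region for each test point is the interval [pred - q, pred + q].
    return preds - q, preds + q

Under exchangeability of the calibration and test data, these intervals cover the true response with probability at least 1 - alpha regardless of the model's quality; a poor model simply yields wider intervals.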

Despite such guarantees, these implicitly defined prediction regions do not directly lend themselves to practical use; although researchers have long professed their utility, their downstream use has not been immediately obvious. In this thesis, we propose and develop one such use: model-based decision-making. We demonstrate that conformal prediction can be integrated into a variety of decision-making pipelines, from single-step predict-then-optimize problems to model-based LQR control, and consequently enables guarantees on suboptimality that otherwise could not be established.

We develop this conformal decision-making framework over three works. In the first, we focus on conformal prediction in the space of scientific inquiry, where decisions are often framed as hypothesis tests on parameter values. In domains such as astrophysics and neuroscience, it is increasingly common to use approximate variational inference for such parameter estimation, owing to the sheer scale at which estimation must be performed. Amortized variational inference produces a posterior approximation that can be rapidly computed for any new observation. Unfortunately, there are few guarantees on the quality of these approximate posteriors. We propose Conformalized Amortized Neural Variational Inference (CANVI), a procedure that is scalable, easily implemented, and provides guaranteed marginal coverage. Given a collection of candidate amortized posterior approximators, CANVI constructs a conformalized predictor from each candidate, compares the predictors using a metric known as predictive efficiency, and returns the most efficient one.
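
The selection step admits a schematic sketch under assumed interfaces: each candidate approximator q is assumed to expose log_prob(theta, x) for its amortized posterior density, the conformity score is taken as the negative approximate density, and region size (the efficiency proxy) is estimated by uniform Monte Carlo over a user-supplied bounding box. All names and the efficiency estimate below are illustrative, not the thesis's exact implementation.

import numpy as np

def canvi_select(candidates, theta_cal, x_cal, x_eval, low, high,
                 alpha=0.1, n_mc=5000, seed=0):
    """Return the candidate whose conformalized regions are smallest on average."""
    rng = np.random.default_rng(seed)
    n = len(theta_cal)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    low, high = np.asarray(low), np.asarray(high)
    box_vol = np.prod(high - low)
    best = (None, None, np.inf)  # (approximator, threshold, mean region size)
    for q in candidates:
        # Calibrated threshold for the score -log q(theta | x).
        tau = np.quantile(-q.log_prob(theta_cal, x_cal), level, method="higher")
        # Monte Carlo volume of {theta : -log q(theta | x) <= tau},
        # averaged over held-out inputs, as the predictive-efficiency proxy.
        sizes = []
        for x in x_eval:
            theta = rng.uniform(low, high, size=(n_mc, len(low)))
            xs = np.broadcast_to(np.asarray(x), (n_mc,) + np.shape(x))
            sizes.append(np.mean(-q.log_prob(theta, xs) <= tau) * box_vol)
        if np.mean(sizes) < best[2]:
            best = (q, tau, np.mean(sizes))
    return best[0], best[1]  # most efficient predictor and its threshold

Because each candidate is conformalized before comparison, every returned region individually carries the marginal coverage guarantee; the sketch assumes x_eval is held out from the calibration pairs.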

In the next work, we generalize the setting for such robust decision-making, expanding from scientific parameter testing to the broader space of "predict-then-optimize" problems. As in standard decision-making formulations, these problems cast the decision-making task as a parametric optimization problem. The unique aspect here, however, is that the parameters of the problem are not revealed to the decision-maker. The decision-maker is therefore forced to estimate the unknown parameters and optimize against the resulting surrogate objective, hence the name: the parameters are "predicted" by an upstream model. In the nominal approach, the predicted parameters are assumed to coincide exactly with the true, unknown ones; this specification, however, admits no formal guarantees on the resulting decision. To address this, we develop a robust analog of the nominal formulation, called "Conformal Predict-Then-Optimize" (CPO), for which suboptimality guarantees can be established. We then show that a simple, residual-based score yields overly conservative decisions and propose an alternative score that produces structured, non-convex prediction regions and, in turn, more informative decisions.
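
A stylized instance of the robust formulation, using the simple residual-based box score discussed above (the conservative baseline, not the thesis's proposed score): a linear cost c @ w is minimized over a finite set of candidate decisions W, and the decision is hedged against every cost vector in a conformal box around the prediction. Interfaces and names are assumptions for illustration.

import numpy as np

def cpo_decision(model, X_cal, c_cal, x_new, W, alpha=0.1):
    """Pick the decision minimizing worst-case cost over a conformal box."""
    # Conformity score: max-coordinate residual of the predicted cost vector.
    scores = np.max(np.abs(c_cal - model.predict(X_cal)), axis=1)
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    c_hat = model.predict(x_new[None])[0]
    # For a linear cost c @ w, the worst case over the box
    # {c : max_i |c_i - c_hat_i| <= q} is c_hat @ w + q * sum_i |w_i|.
    robust_costs = W @ c_hat + q * np.abs(W).sum(axis=1)
    return W[np.argmin(robust_costs)]

Since the box contains the true cost vector with probability at least 1 - alpha, the realized cost of the returned decision is bounded by its robust objective value with the same probability, which is precisely the kind of guarantee the nominal approach lacks.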

Finally, we demonstrate the generality of the proposed conformal predict-then-optimize framework. In particular, we show that CPO extends to a recently proposed variant of conformal prediction in which the scalar score function is replaced with an analogous vector score and the quantile threshold with a quantile envelope. We similarly show that CPO lends itself naturally to model-based robust control applications. We develop extensions for these two settings and demonstrate the consistent empirical improvements each produces.
