Presented By: Industrial & Operations Engineering

SEMINAR: "Sparse Estimation: Closing the Gap Between L0 and L1 Models" — Alper Atamturk

Departmental Seminar (899)
The Departmental Seminar Series is open to all. U-M Industrial and Operations Engineering graduate students and faculty are especially encouraged to attend.

Title:
Sparse Estimation: Closing the Gap Between L0 and L1 Models

Abstract:
Sparse statistical estimators are increasingly prevalent due to their interpretability and superior out-of-sample performance. However, sparse estimation problems with an L0 constraint restricting the support of the estimators are challenging non-convex optimization problems that are typically, though not always, NP-hard. Consequently, academics and practitioners commonly turn to convex L1 proxies, such as the Lasso and its variants, as a remedy. Although L1 models can be solved quickly, they may yield biased and/or dense estimators and require substantial cross-validation for calibration.
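
As a minimal illustration of the L0-versus-L1 contrast (my own sketch, not material from the talk), the Python snippet below solves a tiny L0-constrained regression exactly by brute-force best-subset search and compares it with the Lasso. The support size k, the Lasso weight alpha, and all data are arbitrary illustrative choices; brute force is only feasible because p is tiny.

    # L0 (best subset) vs. L1 (Lasso) on a small synthetic problem.
    from itertools import combinations

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, k = 50, 10, 3
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:k] = [3.0, -2.0, 1.5]          # sparse ground truth
    y = X @ beta_true + 0.1 * rng.standard_normal(n)

    # L0: enumerate all supports of size <= k (exponential in general,
    # which is why L0 problems are hard and convex proxies are popular).
    best_rss, best_beta = np.inf, np.zeros(p)
    for size in range(1, k + 1):
        for S in combinations(range(p), size):
            cols = list(S)
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ coef) ** 2)
            if rss < best_rss:
                best_rss = rss
                best_beta = np.zeros(p)
                best_beta[cols] = coef

    # L1: the Lasso shrinks every coefficient, inducing sparsity but also bias.
    lasso = Lasso(alpha=0.1).fit(X, y)

    print("L0 support:", np.flatnonzero(best_beta))
    print("L1 support:", np.flatnonzero(lasso.coef_))
    print("L0 coefs  :", np.round(best_beta[:k], 3))
    print("L1 coefs  :", np.round(lasso.coef_[:k], 3))

On data like this, the L0 solution recovers the true support with unshrunk coefficients, while the Lasso estimates are visibly shrunk toward zero, which is the bias the abstract refers to.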

In this talk, we focus on two estimation problems: i) sparse regression and ii) sparse and smooth signal recovery. The first is known to be NP-hard; we show that the second is equivalent to a submodular minimization problem and, hence, is polynomially solvable. For both problems, we derive a sequence of strong convex relaxations based on the ideal (convex-hull) formulations of rank-one/pairwise quadratic terms with indicator variables. The new relaxations can be formulated as conic quadratic or semidefinite optimization problems in an extended space; they are stronger and more general than the state-of-the-art models based on the reverse Huber and minimax concave penalty functions. Furthermore, the proposed rank-one strengthening can be interpreted as a non-separable, non-convex, unbiased sparsity-inducing regularizer that dynamically adjusts its penalty according to the shape of the estimation error function without inducing bias in the sparse solutions. Computational experiments with benchmark datasets show that the proposed conic formulations can be solved quickly and yield near-optimal estimators for non-convex L0 problems. Moreover, the resulting estimators outperform L1 approaches statistically, achieving high prediction accuracy and good interpretability.
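
To make the "convex relaxation in an extended space" idea concrete, here is a rough sketch (my own, not code from the papers) of the plain separable perspective relaxation of L0-regularized ridge regression, written with cvxpy. Minimizing out the relaxed indicators in this model recovers a reverse-Huber-type penalty, i.e., the kind of baseline the talk's rank-one relaxations strengthen. The weights lam and mu and all data are illustrative assumptions.

    # Separable perspective relaxation of L0-regularized ridge regression:
    #   min ||y - Xb||^2 + lam * sum_i b_i^2 / z_i + mu * sum_i z_i,  0 <= z <= 1,
    # where z_i are relaxed binary indicators of the support of b.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 10
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [3.0, -2.0, 1.5]
    y = X @ beta_true + 0.1 * rng.standard_normal(n)

    lam, mu = 0.5, 1.0                  # ridge weight, per-variable L0 price
    b = cp.Variable(p)                  # regression coefficients
    z = cp.Variable(p)                  # relaxed indicators, z_i in [0, 1]

    # The perspective term b_i^2 / z_i replaces b_i^2; it tightens the
    # relaxation by coupling each coefficient to its indicator.
    persp = sum(cp.quad_over_lin(b[i], z[i]) for i in range(p))
    obj = cp.sum_squares(y - X @ b) + lam * persp + mu * cp.sum(z)
    prob = cp.Problem(cp.Minimize(obj), [z >= 0, z <= 1])
    prob.solve()                        # solved as a conic quadratic program

    print("relaxation bound:", round(prob.value, 3))
    print("fractional z    :", np.round(z.value, 2))

This separable model handles each quadratic term b_i^2 independently; the relaxations in the talk go further by convexifying rank-one and pairwise quadratic terms jointly with their indicators, which is what makes them stronger.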

This talk is based on the following papers, coauthored with Andres Gomez and Shaoning Han:

https://atamturk.ieor.berkeley.edu/pubs/rank-one.pdf
https://atamturk.ieor.berkeley.edu/pubs/screening.pdf
https://atamturk.ieor.berkeley.edu/pubs/signal-estimation.pdf
