Presented By: Applied Interdisciplinary Mathematics (AIM) Seminar - Department of Mathematics

AIM Seminar / MCAIM Colloquium: From Classical Regression to the Modern Regime: Surprises for Linear Least Squares Problems

Rishi Sonthalia, UCLA

Linear regression is a problem that has been studied extensively. However, modern machine learning has brought to light many new and exciting phenomena due to overparameterization. In this talk, I briefly introduce the phenomena observed in recent years. Then, building on this, I present recent theoretical work on linear denoising. Despite the importance of denoising in modern machine learning and ample empirical work on supervised denoising, its theoretical understanding is still relatively scarce. One concern about studying supervised denoising is that one might not always have noiseless training data from the test distribution; it is more realistic to assume access to noiseless training data from a dataset different from the test dataset. Motivated by this, we study supervised denoising and noisy-input regression under distribution shift. We add three considerations to increase the applicability of our theoretical insights to real-life data and modern machine learning. First, we assume that our data matrices are low-rank. Second, we drop independence assumptions on our data. Third, the rise in computational power and in the dimensionality of data has made it essential to study non-classical learning regimes. Thus, we work in the non-classical proportional regime, where the data dimension $d$ and the number of samples $N$ grow with $d/N = c + o(1)$.
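
As a concrete illustration of this setting, here is a minimal sketch in Python/NumPy, assuming Gaussian data and noise; all names and parameter values (d, N, r, sigma) are illustrative assumptions, not taken from the paper. It fits the minimum-norm linear denoiser W = X Y^+ to low-rank data with aspect ratio c = d/N = 2:

# A minimal sketch (not the speaker's code) of the supervised denoising
# setup: learn a linear map W that recovers clean data X from noisy
# inputs Y = X + A, with low-rank X and with dimension d and sample
# count N in the proportional regime d/N = c.
import numpy as np

rng = np.random.default_rng(0)
d, N, r, sigma = 400, 200, 10, 0.5            # c = d/N = 2 (overparameterized)

U = rng.standard_normal((d, r)) / np.sqrt(r)
Z = rng.standard_normal((r, N))
X = U @ Z                                     # low-rank clean data, rank r << min(d, N)
Y = X + sigma * rng.standard_normal((d, N))   # noisy training inputs

W = X @ np.linalg.pinv(Y)                     # minimum-norm least-squares denoiser

Zt = rng.standard_normal((r, N))              # fresh test data from the same model
Xt = U @ Zt
Yt = Xt + sigma * rng.standard_normal((d, N))
print("test MSE:", np.mean((W @ Yt - Xt) ** 2))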

For this setting, we derive general test error expressions for both denoising and noisy-input regression and characterize when overfitting the noise is benign, tempered, or catastrophic. We show that the test error exhibits double descent under general distribution shifts, providing insights into data augmentation and the role of noise as an implicit regularizer. We also perform experiments using real-life data, matching the theoretical predictions to within 1% error in MSE for low-rank data.
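
The double descent phenomenon itself is easy to reproduce in a simpler setting. The sketch below, assuming isotropic Gaussian data (so not the low-rank, distribution-shift setting analyzed in the talk), sweeps the aspect ratio c = d/N for the minimum-norm least-squares estimator; the printed test MSE spikes near the interpolation threshold c = 1 and descends again for c > 1:

# A hedged illustration of double descent for minimum-norm least squares.
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 200, 0.5
beta = rng.standard_normal(d) / np.sqrt(d)    # ground-truth coefficients

for c in (0.25, 0.5, 0.9, 1.0, 1.1, 2.0, 4.0):
    N = int(d / c)
    X = rng.standard_normal((N, d))
    y = X @ beta + sigma * rng.standard_normal(N)
    b_hat = np.linalg.pinv(X) @ y             # OLS for N > d, min-norm interpolator for N < d
    Xt = rng.standard_normal((2000, d))       # fresh test set
    yt = Xt @ beta + sigma * rng.standard_normal(2000)
    print(f"c = d/N = {c:4.2f}   test MSE = {np.mean((Xt @ b_hat - yt) ** 2):9.3f}")

A single random draw is noisy near c = 1; averaging over several draws smooths the curve but does not change its shape.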

The talk will be given in person and on Zoom: https://umich.zoom.us/j/98734707290

[Contact: Peter Miller]
