Abstract: As machine learning (ML) and artificial intelligence (AI) systems are increasingly deployed in practical settings, it is essential that they exhibit responsible, trustworthy, and safe behavior. This dissertation contributes to machine learning safety by focusing on the key aspects of value alignment, data quality, distributional robustness, and their intersections. It is motivated by three concerns: discrepancies between societal expectations and the behavior of end-to-end ML systems, data quality degradation caused by privacy protocols or data noise, and distribution shifts induced by dynamic interactions between ML models and their environments.

This dissertation zeroes in on three problems that arise in machine learning safety (generic formulations of the first and third are sketched after the list):
1. calibrated data-dependent constraints: guaranteeing that real-world goals are satisfied with a user-prescribed probability at test time;
2. the statistical cost of data noise in constrained learning: quantifying the effect of sensitive-attribute noise on the generalizability of (fairness) constraints;
3. model misspecification in performative prediction: designing a robust learning framework that better approximates the performative optimum.
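
For orientation, the first and third problems are commonly formalized as follows in the literature; these are standard textbook formulations, not necessarily the exact setups of the dissertation. All notation here is illustrative: theta denotes model parameters, ell a loss function, g a constraint function encoding the real-world goal, alpha a user-prescribed risk level, and D(theta) the data distribution induced by deploying the model theta.

    % (1) Chance-constrained learning: the goal encoded by g must hold
    %     with probability at least 1 - alpha at test time.
    \min_{\theta} \ \mathbb{E}_{Z \sim P}\left[\ell(\theta; Z)\right]
    \quad \text{s.t.} \quad
    \mathbb{P}_{Z \sim P}\big(g(\theta; Z) \le 0\big) \ge 1 - \alpha

    % (3) Performative prediction: the data distribution D(theta) reacts
    %     to the deployed model, and the performative optimum accounts
    %     for that reaction rather than treating the data as fixed.
    \theta_{\mathrm{PO}} \in \arg\min_{\theta} \
    \mathbb{E}_{Z \sim D(\theta)}\left[\ell(\theta; Z)\right]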

By carefully combining tools from distributionally robust optimization, stochastic optimization, and statistical inference, this dissertation tackles these pressing issues in ML safety. The methodologies and insights provided herein lay the groundwork for a more ethical and effective deployment of ML and AI technologies, helping to ensure that these systems align with human values and remain reliable amid the complexities of real-world applications.
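
As a point of reference, the distributionally robust optimization template these tools build on takes the following generic form; this is a sketch, with the ambiguity set U_epsilon and radius epsilon assumed for illustration (e.g., an f-divergence or Wasserstein ball around the empirical distribution P_n):

    % Generic DRO objective: minimize the worst-case expected loss over
    % an ambiguity set of distributions near the empirical distribution.
    \min_{\theta} \ \sup_{Q \in \mathcal{U}_{\varepsilon}(\widehat{P}_n)}
    \mathbb{E}_{Z \sim Q}\left[\ell(\theta; Z)\right]

Hedging against every distribution in the ambiguity set is what makes guarantees of this type degrade gracefully under the data-quality issues and distribution shifts described above.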
