Presented By: Michigan Robotics
Architectures for Safe Autonomy: Provable Guarantees Across Control, Planning, and Perception
PhD Defense, Devansh Agrawal

Chair: Dimitra Panagou
Abstract
This thesis addresses the design of safety-critical autonomous systems: systems that must always satisfy a set of safety constraints. The primary objective is to develop a cohesive architecture for the entire autonomy stack, ensuring that, under specific and verifiable assumptions, a robot can execute its mission while respecting these constraints.
To achieve this, we take a bottom-up approach, beginning with the design of a safety-critical controller and identifying the necessary assumptions for its safe operation. These assumptions impose requirements on upstream autonomy modules, such as planning and perception. We then develop methods to construct these modules in a way that preserves safety guarantees across the entire autonomy stack.
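To make the bottom-up idea concrete, a common starting point for safety-critical control is a control-barrier-function (CBF) safety filter, which minimally modifies a nominal command so the state never leaves a safe set. The sketch below is illustrative only and is not taken from the thesis: it uses a 1-D single integrator x_dot = u with safe set {x >= 0}, where the CBF condition reduces to u >= -alpha * x and the filtered input has a closed form.

```python
# Illustrative sketch of a CBF-style safety filter (not the thesis's controller).
# System: 1-D single integrator x_dot = u, safe set {x >= 0} with h(x) = x.
# The CBF condition h_dot >= -alpha * h(x) here becomes u >= -alpha * x, so the
# safe input closest to the nominal command is simply a pointwise maximum.

def safety_filter(x: float, u_nominal: float, alpha: float = 1.0) -> float:
    """Return the input closest to u_nominal that satisfies u >= -alpha * x."""
    return max(u_nominal, -alpha * x)

def simulate(x0: float, u_nominal: float, dt: float = 0.01, steps: int = 500) -> float:
    """Roll the filtered closed loop forward with Euler steps; x stays >= 0."""
    x = x0
    for _ in range(steps):
        u = safety_filter(x, u_nominal)
        x += dt * u
    return x

if __name__ == "__main__":
    # The nominal command drives the state toward the unsafe region (x < 0);
    # the filter intervenes near the boundary and keeps the state nonnegative.
    print(f"final state: {simulate(x0=1.0, u_nominal=-2.0):.4f}")
```

The assumptions this filter needs to be valid (an accurate dynamics model, bounded disturbances, a correct safe-set description) are exactly the kinds of requirements that, per the abstract, propagate upstream to the planning and perception modules.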
The main contributions of this thesis include: (A) the gatekeeper architecture, a flexible framework for establishing rigorous safety guarantees at the planning level; (B) the development of certifiably correct perception algorithms that not only produce accurate obstacle maps but also provide error bounds to ensure correctness despite odometry drift; and (C) the introduction of clarity and perceivability, concepts that quantify a robotic system's ability to gather information about its environment, taking into account the environment model as well as the robot's actuation and sensing capabilities.
For each of these contributions, we provide formal proofs and demonstrate their practical effectiveness through simulations and hardware experiments with aerial and mobile robots.
Zoom passcode: opensesame