Presented By: Industrial & Operations Engineering
IOE 899: Yue Wang
Learning for Autonomy: Trust Modeling in Human–Robot Collaboration and Distributional Multi-Agent Reinforcement Learning
This seminar begins with an overview of research at the Interdisciplinary & Intelligent Research (I2R) Lab focused on human–robot interaction and autonomy. We discuss computational models of human trust in robots, how trust can be quantified and learned, and how it can be incorporated into robot decision-making, motion planning, and control to achieve safer autonomy with higher user acceptance. Building on these models, the talk highlights learning-based approaches that enable robots to reason under uncertainty and adapt to human preferences. We consider distributional reinforcement learning, in which key quantities such as value functions are represented as probability distributions rather than point estimates; these methods enable more stable and data-efficient learning. We then address collaborative settings involving multiple agents. By inferring global context through structured local information exchange, these approaches support scalable, robust collaboration without reliance on centralized critics or global information.