Presented By: Michigan Robotics
Situation Awareness and Trust Methods for Improving Human-Robot Team Performance
Robotics PhD Defense, Arsha Ali

RSVP requested.
FRB 2300 and Zoom (passcode: mavric).
Co-chairs: Dawn Tilbury and Lionel Robert
Abstract:
The field of human-robot interaction is dedicated to understanding how humans interact with robots and to developing interventions that enable desirable interactions. Within this expansive field, we explore how two fundamental concepts, trust and situation awareness, apply to and can facilitate interactions between humans and robots. Methods are presented for allocating tasks between humans and robots using trust, for understanding team trust dynamics, and for perceiving and enhancing situation awareness.
To address the problem of allocating tasks between humans and robots, we present a task allocation method that incorporates trust from a robot to allocate both familiar and novel tasks and to learn unknown agent capabilities.
We move toward fostering appropriate trust by demonstrating that a team’s trust in and engagement with autonomy did not decrease over a longitudinal experiment, despite the autonomy’s persistent unreliability.
A multi-phase solution is presented for the problem of improving situation awareness. The solution combines experimentation that uncovers how shared mental models and communication factors influence situation awareness with a method for real-time situation awareness estimation and robot communication adaptation, which we demonstrate improves situation awareness and performance.
The presented work enriches our knowledge of how humans team with robots, moving closer to a reality where humans and robots work together harmoniously to achieve goals.