Presented By: Michigan Robotics

Improving Collaboration Between Drivers and Automated Vehicles with Trust Processing Methods

PhD Defense, Hebert Azevedo Sa

[Image: Virtual autonomous vehicles make a left turn]
Trust has gained attention in the Human-Robot Interaction (HRI) field because it is considered an antecedent of people's reliance on machines: people rely on and use machines they trust, and refrain from using machines they do not. Advances in robotic perception open a path toward machines that are aware of people's trust, observing human behavior to infer whether or not they are trusted. This dissertation explores the role and the intricacies of trust in interactions between humans and robots, particularly Automated Vehicles (AVs).
Novel methods and models are proposed for perceiving and processing drivers' trust in AVs, and for determining humans' natural trust and robots' artificial trust. Two high-level problems are addressed: avoiding or reducing miscalibration of drivers' trust in AVs, and using trust to dynamically allocate tasks between a human and a robot that collaborate.

For the first problem, a complete solution is proposed that combines methods for estimating and influencing drivers' trust through their interactions with the AV. Three main contributions stem from that solution: the characterization of risk factors that affect drivers' trust in AVs; a new method for real-time trust estimation; and a new method for trust calibration.
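
Real-time trust estimation can be framed as a filtering problem over a latent trust state that is observed only through driver behavior. The sketch below is a minimal, hypothetical illustration of that framing, assuming a scalar Kalman-filter-style estimator in which trust follows a random walk and is measured through a noisy behavioral signal (for instance, the fraction of time the driver monitors the road); the class name, the choice of signal, and all noise parameters are assumptions for illustration, not the dissertation's exact model.

import numpy as np

class TrustEstimator:
    # Scalar Kalman-filter-style estimator of a latent trust state.
    # Hypothetical model: trust t_k follows a random walk and is observed
    # through a behavioral signal z_k assumed to satisfy z_k ~ h * t_k + noise.
    def __init__(self, t0=0.5, p0=0.1, q=0.01, r=0.05, h=1.0):
        self.t = t0  # current trust estimate, kept in [0, 1]
        self.p = p0  # variance of the estimate
        self.q = q   # process noise: how much trust may drift per step
        self.r = r   # measurement noise of the behavioral signal
        self.h = h   # observation gain relating trust to the signal

    def update(self, z):
        # Predict: random-walk dynamics keep the mean and inflate the variance.
        p_pred = self.p + self.q
        # Correct: standard scalar Kalman gain and measurement update.
        k = p_pred * self.h / (self.h ** 2 * p_pred + self.r)
        self.t += k * (z - self.h * self.t)
        self.p = (1.0 - k * self.h) * p_pred
        # Clip so the estimate stays interpretable as a probability-like score.
        self.t = float(np.clip(self.t, 0.0, 1.0))
        return self.t

Fed one behavioral measurement per control cycle, an estimator of this shape tracks trust continuously, which is what a downstream calibration method needs in order to intervene when estimated trust drifts away from the AV's actual capability.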

Although a complete trust-based solution for dynamically allocating tasks between a human and a robot remains an open problem, this dissertation takes a step in that direction. The fourth contribution is a unified bi-directional model for predicting both natural and artificial trust. The model allows numerical computation of a human's trust and a robot's trust, each represented as the probability that a given agent successfully executes a given task. As a probability of success, trust can readily be used to compute expected rewards and costs for tasks executed by each candidate agent, and can guide decision-making algorithms that optimize those rewards and costs.
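
Because trust is modeled as a probability of success, allocation reduces to comparing expected utilities across the candidate agents. The sketch below illustrates that reduction under an assumed utility form, EU(agent) = trust * reward - (1 - trust) * cost; the function name, the utility form, and the numbers are hypothetical, chosen only to show how a probabilistic trust estimate plugs into a decision rule.

def allocate_task(trust, reward, cost):
    # trust[a]: estimated probability that agent a succeeds at the task.
    # reward:   payoff if the task is executed successfully.
    # cost[a]:  penalty incurred if agent a fails.
    # Assumed utility: EU(a) = trust[a] * reward - (1 - trust[a]) * cost[a].
    expected = {a: t * reward - (1.0 - t) * cost[a] for a, t in trust.items()}
    # Assign the task to the agent with the highest expected utility.
    return max(expected, key=expected.get), expected

# Example: the robot is more trusted on this task, so it receives it.
agent, utilities = allocate_task(
    trust={"human": 0.7, "robot": 0.9},
    reward=10.0,
    cost={"human": 5.0, "robot": 8.0},
)
print(agent, utilities)  # robot {'human': 5.5, 'robot': 8.2}

Because both the human's trust and the robot's trust come from the same bi-directional model, the same decision rule can be evaluated from either agent's perspective.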