Addressing the Role of Context on Trust in Human-AI Teams: The Influence of Team Role and Violation Type in High-Risk Tasks

This paper reports on an experiment examining how contextual factors influence trust, perceived ethicality, and performance in human-AI teams undertaking a high-risk, action-based task in a military setting. Specifically, the study examined how team role and trust violation framing affected trust, perceived ethicality, and the efficacy of four trust repair strategies when an AI teammate committed an unethical action. Results indicated that trust in and perceived ethicality of the AI team member were significantly higher when ethical violations were framed as integrity-based rather than competency-based. Additionally, participants in the Ground role, who depended more on the AI for their safety, rated the AI higher in trust and ethicality. However, trust repair strategies did not significantly affect trust in the AI team member after an ethical violation. These results highlight the significance of context in shaping trust responses to AI ethical violations.
