Examining the Role of AI Information-Sharing on Trust in Human-AI Teams
Creating reliable human-AI teams with strong team cognition requires a deeper understanding of how humans perceive and react to the information-sharing tactics employed by an AI teammate. The current study details a factorial survey investigating how participants’ trust in their AI and human teammates changed across various forms of AI information sharing. These information-sharing conditions included backup behavior, explainability, team member statuses, intra- and extra-team changes, augmenting team memory, and a control. The study found that participants’ trust in their AI teammate benefited most from information about changes inside and outside the team, although all conditions outperformed the control. Further, backup-behavior information negatively affected participants’ trust in their human teammates, as the AI corrected those teammates’ actions. These findings demonstrate the utility of a specific form of situation-awareness information while highlighting that corrective backup-behavior information from an AI may harm perceptions between human teammates.