The Role of Autonomy Levels and Contextual Risk in Designing Safer AI Teammates
As AI becomes more intelligent and autonomous, the concept of human-AI teaming has become more realistic and attractive. Despite the promise of AI teammates, human-AI teams face new and unique challenges. One such challenge is the declining ability of human team members to detect and respond to AI failures as humans become further removed from the AI's decision-making loop. In this study, we conducted virtual experiments with twelve experts in two different teaming contexts, cyber incident response and medical triage, to understand how contextual risk affects human teammates' situational awareness and failure performance over a human-AI team's action cycle. Our results indicate that situational awareness is more closely tied to context, while failure performance is more closely tied to the team's action cycle. These results provide a foundation for future research into using contextual risk to determine optimal autonomy levels for AI teammates.