An Analysis of Ethical Rationales and Their Impact on the Perceived Moral Persona of AI Teammates

The morality of actions, intentions, and the overall collaborative context is vital to any teaming endeavor, especially as we enter the era of human-AI teaming. In particular, how an AI communicates its intent and ethical reasoning can be crucial to how human actors perceive and construe its moral persona. We conducted an online experimental study comprising four ethical justification conditions (deontology, utilitarianism, virtue, and control) crossed with two contextual outcomes (positive and negative) to understand how the ethical justification frameworks an AI provides shape human teammates' moral perceptions of the AI in human-AI teams. The results indicate that deontology-based justifications led to heightened moral perceptions compared to the other frameworks when decision outcomes were contextually negative. These findings have important implications for the robust design of AI teammates that must manage contextual variation in critical decision-making settings with ethical stakes.
