Robots in our society are commonly perceived as subordinate servants with a lower social status than humans, which often leads humans to prioritize themselves in conflict situations. This becomes problematic when robots act as proxies that directly represent humans, yet people do not consider the human operator behind them; this can be seen as a cognitive bias of human representation in HRI. To explore the extent of this problem, we conducted a user study featuring several conflict situations. Participants granted more priority to the robot when its human representation was visible. This paper explores the societal consequences and emerging inequities, such as inadvertently deprioritizing a human by deprioritizing the robot that represents them. We discuss possible design-level strategies to address these negative consequences, while acknowledging that a broader societal change in how we perceive and treat robots that represent humans may be necessary.
Funding: ELLIIT, Excellence Center at Linköping-Lund in Information Technology