What Would I Do If…? Promoting Understanding in HRI through Real-Time Explanations in the Wild

Abstract

As robots become more integrated into human spaces, it is increasingly important for them to explain their decisions. These explanations need to be generated automatically in response to decisions taken in dynamic, unstructured environments. However, most research in explainable HRI considers only explanations (often manually selected) in controlled environments. We present an explanation generation method based on counterfactuals and demonstrate it in an in-the-wild experiment with autonomous interactions between the robot and real people, assessing the effect of these explanations on participants’ ability to predict the robot’s behavior in hypothetical scenarios. Our results suggest that explanations aid participants’ ability to predict the robot’s behavior, but that counterfactual statements may impose additional cognitive load, counteracting this benefit.
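The paper's own algorithm is not reproduced here, but the core idea of counterfactual explanation generation can be illustrated with a minimal sketch: search for the smallest change to the robot's perceived state that would have led it to act differently, then verbalize that contrast ("What would I do if…?"). Everything below is an illustrative assumption, not the authors' implementation: the feature set, the stand-in `policy`, and the phrasing template are all hypothetical.

```python
# Hypothetical sketch of counterfactual explanation generation.
# The feature set and `policy` below are illustrative stand-ins,
# not the method from the paper.

from itertools import product

# Illustrative boolean state features a service robot might condition on.
FEATURES = {
    "person_ahead": [True, False],
    "path_blocked": [True, False],
    "battery_low": [True, False],
}

def policy(state: dict) -> str:
    """Stand-in decision rule; a real robot's planner would go here."""
    if state["battery_low"]:
        return "return to dock"
    if state["person_ahead"] or state["path_blocked"]:
        return "wait"
    return "proceed"

def counterfactual_explanation(state: dict) -> str:
    """Explain the chosen action via the smallest feature change
    that would have made the robot act differently."""
    action = policy(state)
    best = None  # (num_changes, changed_features, alternative_action)
    for values in product(*FEATURES.values()):
        alt = dict(zip(FEATURES.keys(), values))
        changed = [k for k in FEATURES if alt[k] != state[k]]
        if changed and policy(alt) != action:
            if best is None or len(changed) < best[0]:
                best = (len(changed), changed, policy(alt))
    if best is None:
        return f"I chose to {action}."
    _, changed, alt_action = best
    cond = " and ".join(f"{k} were {not state[k]}" for k in changed)
    return f"I chose to {action}; if {cond}, I would have chosen to {alt_action}."

if __name__ == "__main__":
    state = {"person_ahead": True, "path_blocked": False, "battery_low": False}
    print(counterfactual_explanation(state))
    # -> "I chose to wait; if battery_low were True,
    #     I would have chosen to return to dock."
```

The exhaustive search works for small discrete state spaces; for continuous or high-dimensional states, one would instead use an optimization- or sampling-based search for a nearest decision-flipping state.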

Publication
33rd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
