DOI: 10.5555/2936924.2937071

The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams

Published: 09 May 2016

Abstract

Researchers have observed that people calibrate their trust in an autonomous system, such as a robot, more accurately when they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain effective team performance even when the system is less than 100% reliable. However, current explanation algorithms are not sufficient for making a robot's quantitative reasoning (in terms of both uncertainty and conflicting goals) transparent to human teammates. In this work, we develop a novel mechanism for robots to automatically generate explanations of reasoning based on Partially Observable Markov Decision Processes (POMDPs). Within this mechanism, we implement alternate natural-language templates and measure their differential impact on trust and team performance within an agent-based online test-bed that simulates a human-robot team task. The results demonstrate that the added explanation capability improves transparency, trust, and team performance. Furthermore, by observing the different outcomes produced by variations in the robot's explanation content, we gain insight that can guide the refinement of explanation algorithms and further improve human-robot interaction.
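
To make the approach concrete, the sketch below illustrates one way a robot's POMDP-style reasoning, a belief distribution over hidden states combined with expected rewards for each action, could be rendered through natural-language templates. It is a minimal Python sketch under assumed details: the reconnaissance scenario, state names, reward values, and template wording are all illustrative inventions, not the paper's implementation.

    # Minimal sketch: rendering a POMDP-style decision through natural-language
    # templates. All names, numbers, and templates here are illustrative
    # assumptions for exposition, not taken from the paper's system.

    def best_action(belief, reward):
        """Choose the action with the highest expected reward under the belief."""
        return max(reward, key=lambda a: sum(belief[s] * reward[a][s] for s in belief))

    def explain(belief, reward, templates):
        """Fill a per-action template with the most likely state and its probability."""
        action = best_action(belief, reward)
        state, prob = max(belief.items(), key=lambda kv: kv[1])
        return templates[action].format(state=state, confidence=round(100 * prob))

    # Hypothetical reconnaissance scenario: is the area dangerous or safe?
    belief = {"dangerous": 0.7, "safe": 0.3}              # P(state | observations)
    reward = {                                            # R(action, state)
        "recommend gear":    {"dangerous": 10, "safe": -2},
        "recommend no gear": {"dangerous": -20, "safe": 5},
    }
    templates = {
        "recommend gear":
            "I assess a {confidence}% chance the area is {state}, so I recommend protective gear.",
        "recommend no gear":
            "I assess a {confidence}% chance the area is {state}, so protective gear is unnecessary.",
    }

    print(explain(belief, reward, templates))
    # -> I assess a 70% chance the area is dangerous, so I recommend protective gear.

Varying which quantities a template exposes, such as the confidence level or the decision criterion, corresponds to the kind of alternate explanation content whose differential impact on trust and team performance the study measures.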


Published In

AAMAS '16: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems
May 2016
1580 pages
ISBN: 978-1-4503-4239-1

Sponsors

  • IFAAMAS

Publisher

International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC

Publication History

Published: 09 May 2016

Author Tags

  1. explainable AI
  2. human-robot interaction
  3. POMDPs
  4. trust

Qualifiers

  • Research-article

Funding Sources

  • US Army Research Laboratory

Conference

AAMAS '16
Sponsor: IFAAMAS

Acceptance Rates

AAMAS '16: 137 of 550 submissions accepted, 25%
Overall: 1,155 of 5,036 submissions accepted, 23%
