DOI: 10.1145/3527188.3563923
Short Paper

Evaluating Human-Artificial Agent Decision Congruence in a Coordinated Action Task

Published: 05 December 2022

Abstract

Recommender systems designed to augment human decision-making in multi-agent tasks need to recommend actions that not only align with the task goal but also maintain coordinative behaviors between agents. Further, if these systems are to be used for skill training, they need to impart implicit learning to their users. This work compared a recommender system trained using deep reinforcement learning to a heuristic-based system in recommending actions to human participants teaming with an artificial agent during a collaborative problem-solving task. In addition to evaluating task performance and learning, we also evaluated the extent to which human actions were congruent with the recommended actions.
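
The landing page does not include the paper's methods or code. As a rough illustration of the congruence measure described above, the sketch below computes the proportion of decision points at which a participant's chosen action matched the system's recommendation; the function name congruence_rate and the example action labels are hypothetical and not taken from the paper.

# Hypothetical sketch (not from the paper): decision congruence as the
# proportion of decision points where the human's chosen action matches
# the recommender's suggested action.
from typing import Sequence

def congruence_rate(human_actions: Sequence[str],
                    recommended_actions: Sequence[str]) -> float:
    """Return the fraction of decision points where the human followed the recommendation."""
    if len(human_actions) != len(recommended_actions):
        raise ValueError("Action sequences must have the same length.")
    if not human_actions:
        return 0.0
    matches = sum(h == r for h, r in zip(human_actions, recommended_actions))
    return matches / len(human_actions)

# Example with made-up action labels for a shepherding-style task:
human = ["contain_A", "contain_B", "contain_B", "contain_C"]
recommended = ["contain_A", "contain_C", "contain_B", "contain_C"]
print(f"Congruence: {congruence_rate(human, recommended):.2f}")  # 0.75

In a study like the one described, such a rate could be computed per participant and compared between the deep reinforcement learning and heuristic-based recommenders.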

Published In

HAI '22: Proceedings of the 10th International Conference on Human-Agent Interaction
December 2022
352 pages
ISBN: 9781450393232
DOI: 10.1145/3527188
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 December 2022

Author Tags

  1. decision making
  2. hierarchical deep reinforcement learning
  3. multi-agent coordination
  4. recommender system
  5. shepherding

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Funding Sources

  • Australian Department of Defence, Human Performance Research Network
  • National Computational Infrastructure, Australia
  • Macquarie University Research Fellowship
  • Australian Research Council Future Fellowship

Conference

HAI '22
HAI '22: International Conference on Human-Agent Interaction
December 5 - 8, 2022
Christchurch, New Zealand

Acceptance Rates

Overall Acceptance Rate 121 of 404 submissions, 30%
