
Static Socio-demographic and Individual Factors for Generating Explanations in XAI: Can they serve as a prior in DSS for adaptation of explanation strategies?

Published: 24 November 2024

Abstract

Current XAI research shows that explanations of AI need to be tailored to the individual explainee. We investigate whether XAI explanations can be successfully adapted to humans when based on an appropriate static partner model representing relevant features. More specifically, we analyze the effects of static socio-demographic and individual factors on advice-taking under different explanation strategies in a human-agent interaction scenario. Results showed significant effects on advice-taking of the participant's risk value, their mathematical self-assessment, and the distance and direction of the advice relative to their first selection. Leveraging these results for an adaptation scheme, we train a classifier to predict a suitable explanation strategy from all static features and compare it to a classifier working on dynamic features. Our results show that dynamic factors are as important as static ones. Combining static and dynamic factors increased the classifier's accuracy, but the model overfitted and did not generalize. Splitting the dataset by nationality yielded better generalization performance, indicating that nationality has an effect on predicting advice-taking. In addition, we propose an adapted advice-taking measure that accounts for adjustment beyond the given advice.
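For context, advice-taking in such judge-advisor studies is commonly quantified with the classic weight-of-advice (WoA) measure. The sketch below illustrates that standard formula only; it is a generic illustration under that assumption, not the adapted measure this paper proposes, and the function name is hypothetical:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Classic weight of advice: 0 means the advice was ignored,
    1 means it was fully adopted; values above 1 indicate adjustment
    beyond the given advice (the case the paper's adapted measure targets).

    Undefined when the advice coincides with the initial estimate.
    """
    if advice == initial:
        raise ValueError("WoA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Initial estimate 10, advice 20, final judgment 15: half the gap was closed.
print(weight_of_advice(10, 20, 15))  # 0.5
# Final judgment 25 overshoots the advice, so WoA exceeds 1.
print(weight_of_advice(10, 20, 25))  # 1.5
```

A WoA above 1 is exactly the "adaptation beyond the given advice" that a standard, clipped WoA cannot express, which motivates an adapted measurement.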



    Published In

    HAI '24: Proceedings of the 12th International Conference on Human-Agent Interaction
    November 2024
    502 pages
    ISBN:9798400711787
    DOI:10.1145/3687272
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Advice taking
    2. XAI
    3. adapted explanations
    4. emotions
    5. explanation strategies
    6. personality

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    HAI '24
    HAI '24: International Conference on Human-Agent Interaction
    November 24 - 27, 2024
    Swansea, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 121 of 404 submissions, 30%
