Article
DOI: 10.1007/978-3-031-48421-6_22

An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-Oriented Systems

Published: 28 November 2023

Abstract

Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL has been successfully applied to problems such as dynamic service composition, job scheduling, and service adaptation. While Deep RL offers many benefits, understanding its decision-making is challenging because the action-selection policy that underlies its decisions essentially appears as a black box. Yet, understanding this decision-making is key to helping service developers debug their systems, supporting service providers in complying with relevant legal frameworks, and enabling service users to build trust. We introduce Chat4XAI to provide natural-language explanations of the decision-making of Deep RL. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance, and more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations that relies on classical software-based dialogue systems, an AI chatbot eliminates the need to elicit and define potential questions and answers up front. We prototypically realize Chat4XAI using OpenAI’s ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar.
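To make the described approach concrete, the following is a minimal Python sketch (assuming the official openai client library) of how monitored state, Q-values, and the selected action of a Deep RL agent could be sent to OpenAI’s chat API to obtain a natural-language explanation. This is not the authors’ Chat4XAI implementation: the prompt wording, the explain_decision helper, the state and Q-value fields, and the model choice are illustrative assumptions.

# Minimal sketch (not the authors' implementation): turning Deep RL decision data
# into a natural-language explanation via OpenAI's chat API. Prompt wording,
# field names, and the model choice are assumptions for illustration only.
from openai import OpenAI  # official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_decision(state: dict, action: str, q_values: dict) -> str:
    """Ask the chatbot to explain why the RL policy chose `action` in `state`."""
    system_prompt = (
        "You explain decisions of a deep reinforcement learning agent that "
        "adapts a service-oriented system. Answer in plain language for "
        "non-technical users and refer only to the data provided."
    )
    user_prompt = (
        f"Observed system state: {state}\n"
        f"Action-value estimates (Q-values): {q_values}\n"
        f"Selected adaptation action: {action}\n"
        "Explain in two or three sentences why this action was selected."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # ChatGPT model family available in the paper's timeframe
        temperature=0,          # lower temperature for more stable explanations
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Example call with made-up monitoring data from an adaptive web service:
print(explain_decision(
    state={"arrival_rate": 120, "avg_response_time_ms": 850, "active_servers": 2},
    action="add_server",
    q_values={"add_server": 0.82, "remove_server": 0.11, "adjust_dimmer": 0.45},
))

In Chat4XAI’s setting, such a call would presumably be driven by the questions that service developers, providers, or users pose via the chatbot interface; the sketch only illustrates the basic pattern of combining a prompt with decision data from the RL agent.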



            Published In

Service-Oriented Computing: 21st International Conference, ICSOC 2023, Rome, Italy, November 28 – December 1, 2023, Proceedings, Part I
November 2023, 429 pages
ISBN: 978-3-031-48420-9
DOI: 10.1007/978-3-031-48421-6
Editors: Flavia Monti, Stefanie Rinderle-Ma, Antonio Ruiz Cortés, Zibin Zheng, Massimo Mecella

Publisher

Springer-Verlag, Berlin, Heidelberg


            Author Tags

            1. chatbot
            2. explainable AI
            3. reinforcement learning
            4. service engineering
            5. service adaptation
