DOI: 10.1145/3397481.3450655
Research article

From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein’s Theory of Explanation

Published: 14 April 2021

Abstract

We propose a new method for generating explanations in Artificial Intelligence (AI) and a tool to test its expressive power within a user interface. To bridge the gap between philosophy and human-computer interfaces, we present a new approach to the generation of interactive explanations, based on a pipeline of AI algorithms that structures natural language documents into knowledge graphs and answers questions effectively and satisfactorily. Among the mainstream philosophical theories of explanation, we identified the one that, in our view, is most readily applicable as a practical model for user-centric tools: Achinstein’s Theory of Explanation. With this work we aim to show that Achinstein’s theory can indeed be adapted and implemented in a concrete software application, as an interactive question-answering process. To this end, we found a way to handle the generic (archetypal) questions that implicitly characterise an explanatory process as preliminary overviews, rather than as answers to explicit questions as commonly understood. To demonstrate the expressive power of this approach, we designed and implemented a pipeline of AI algorithms for the generation of interactive explanations in the form of overviews, focusing on this aspect of explanation rather than on existing interfaces and presentation-logic layers for question answering. Accordingly, by identifying a minimal set of archetypal questions, it is possible to build a generator of explanatory overviews that is generic enough to significantly ease the acquisition of knowledge by humans, regardless of user specificities beyond a minimal set of very broad requirements (e.g. the ability to read and understand English and to perform basic common-sense reasoning).
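The idea described above can be illustrated with a minimal, self-contained toy sketch (not the authors' implementation): hand-made subject-predicate-object triples stand in for the knowledge graph that the paper's pipeline extracts automatically from documents, and a small, hypothetical set of archetypal questions is answered against that graph to compose an explanatory overview.

```python
# Toy sketch of overview generation from archetypal questions.
# The triples and the archetype-to-predicate mapping are illustrative
# assumptions, not the paper's actual data or question set.
from collections import defaultdict

# Hypothetical hand-made triples; the paper extracts these automatically
# from natural language documents.
TRIPLES = [
    ("CEM", "is a", "post-hoc explanatory tool"),
    ("CEM", "explains", "credit approval decisions"),
    ("CEM", "developed by", "IBM"),
]

# A minimal set of archetypal questions, each mapped to the predicates
# that can answer it (an illustrative choice, not the paper's set).
ARCHETYPES = {
    "What is it?": {"is a"},
    "What does it do?": {"explains"},
    "Who made it?": {"developed by"},
}

def build_graph(triples):
    """Index triples by subject, so a topic's facts can be looked up."""
    graph = defaultdict(list)
    for subj, pred, obj in triples:
        graph[subj].append((pred, obj))
    return graph

def overview(topic, graph):
    """Answer each archetypal question about `topic`, silently skipping
    questions the graph cannot answer, and join the answers into an
    explanatory overview."""
    lines = []
    for question, predicates in ARCHETYPES.items():
        answers = [f"{pred} {obj}" for pred, obj in graph[topic]
                   if pred in predicates]
        if answers:
            lines.append(f"{question} {topic} {'; '.join(answers)}.")
    return "\n".join(lines)

print(overview("CEM", build_graph(TRIPLES)))
```

In this sketch the archetypal questions act as a fixed template driving retrieval, which is the sense in which they produce a preliminary overview rather than a reply to an explicit user question.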
We tested our hypothesis on a well-known XAI-powered credit-approval system by IBM, comparing CEM, a static tool for post-hoc explanations, with an extension we developed that adds interactive explanations based on our model. The results of the user study, involving more than 100 participants, showed that our proposed solution produced a statistically significant improvement in effectiveness (U=931.0, p=0.036) over the baseline, providing evidence in favour of our theory.
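The reported comparison is of the kind produced by a Mann-Whitney U test on per-participant effectiveness scores. The sketch below shows how such a statistic is computed with `scipy.stats.mannwhitneyu`; the scores are synthetic placeholders, not the study's actual data, and the two-sided alternative is an assumption (the paper's test configuration may differ).

```python
# Hedged sketch of a Mann-Whitney U comparison between a baseline
# condition (static CEM explanations) and a treatment condition
# (interactive explanations). All numbers below are synthetic.
from scipy.stats import mannwhitneyu

# Hypothetical per-participant effectiveness scores in [0, 1].
baseline_scores = [0.4, 0.5, 0.5, 0.6, 0.3, 0.55, 0.45, 0.5]
interactive_scores = [0.6, 0.7, 0.65, 0.8, 0.55, 0.75, 0.7, 0.6]

# U is computed for the first sample; a small p-value indicates the two
# score distributions differ.
u_stat, p_value = mannwhitneyu(interactive_scores, baseline_scores,
                               alternative="two-sided")
print(f"U={u_stat}, p={p_value:.3f}")
```

With the study's sample of over 100 participants, the same call on the real per-condition scores would yield the reported U=931.0 and p=0.036.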

References

[1]
Peter Achinstein. 1983. The nature of explanation. Oxford University Press on Demand.
[2]
Peter Achinstein. 2010. Evidence, explanation, and realism: Essays in philosophy of science. Oxford University Press.
[3]
John Brooke. 2013. SUS: a retrospective. Journal of usability studies 8, 2 (2013), 29–40.
[4]
Carrie J Cai, Jonas Jongejan, and Jess Holbrook. 2019. The effects of example-based explanations in a machine learning interface. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 258–262.
[5]
Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, and Wei Wang. 2019. KBQA: learning question answering over QA corpora and knowledge bases. arXiv preprint arXiv:1903.02419 (2019).
[6]
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Advances in neural information processing systems. 592–603.
[7]
Cecilia Di Sciascio, Vedran Sabol, and Eduardo E Veas. 2016. Rank as you go: User-driven exploration of search results. In Proceedings of the 21st international conference on intelligent user interfaces. 118–129.
[8]
Igor Douven. 2012. Peter Achinstein: Evidence, Explanation, and Realism: Essays in Philosophy of Science.
[9]
International Organization for Standardization. 2010. Ergonomics of human-system interaction: Part 210: Human-centred design for interactive systems. ISO.
[10]
W Nelson Francis and Henry Kucera. 1979. Brown corpus manual. Letters to the Editor 5, 2 (1979), 7.
[11]
Erik Frøkjær, Morten Hertzum, and Kasper Hornbæk. 2000. Measuring usability: are effectiveness, efficiency, and satisfaction really correlated?. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems. 345–352.
[12]
Bernhard Ganter and Rudolf Wille. 2012. Formal concept analysis: mathematical foundations. Springer Science & Business Media.
[13]
Carl G Hempel. 1965. Aspects of scientific explanation. Free Press.
[14]
John H Holland, Keith J Holyoak, Richard E Nisbett, and Paul R Thagard. 1989. Induction: Processes of inference, learning, and discovery. MIT press.
[15]
Steffen Holter, Oscar Gomez, and Enrico Bertini. [n.d.]. FICO Explainable Machine Learning Challenge. ([n. d.]).
[16]
Kasper Hornbæk. 2006. Current practice in measuring usability: Challenges to usability studies and research. International journal of human-computer studies 64, 2 (2006), 79–102.
[17]
IBM. 2019. AI Explainability 360 - Demo. https://aix360.mybluemix.net/explanation_cust. Online; accessed 29-Mar-2020.
[18]
Pigi Kouki, James Schaffer, Jay Pujara, John O’Donovan, and Lise Getoor. 2019. Personalized explanations for hybrid recommender systems. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 379–390.
[19]
Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. 2019. A grounded interaction protocol for explainable artificial intelligence. arXiv preprint arXiv:1903.02409 (2019).
[20]
GR Mayes. 2005. Theories of Explanation. The Internet Encyclopedia of Philosophy.
[21]
Jakob Nielsen. 2012. User satisfaction vs. performance metrics. Nielsen Norman Group (2012).
[22]
Stefan Palan and Christian Schitter. 2018. Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance 17 (2018), 22–27.
[23]
Pearl Pu and Li Chen. 2006. Trust building with explanation interfaces. In Proceedings of the 11th international conference on Intelligent user interfaces. 93–100.
[24]
Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilingual pool. arXiv preprint arXiv:2004.05484 (2020).
[25]
Wesley C Salmon. 1984. Scientific explanation and the causal structure of the world. Princeton University Press.
[26]
Wilfrid Sellars. 1963. Philosophy and the scientific image of man. Science, perception and reality 2 (1963), 35–78.
[27]
Francesco Sovrano, Monica Palmirani, and Fabio Vitali. 2020. Legal Knowledge Extraction for Knowledge Graph Based Question-Answering. In Legal Knowledge and Information Systems: JURIX 2020. The Thirty-third Annual Conference, Vol. 334. IOS Press, 143–153.
[28]
Francesco Sovrano, Fabio Vitali, and Monica Palmirani. 2020. Modelling GDPR-Compliant Explanations for Trustworthy AI. In International Conference on Electronic Government and the Information Systems Perspective. Springer, 219–233.
[29]
Bas C Van Fraassen. 1980. The scientific image. Oxford University Press.
[30]
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019).
[31]
Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2019. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307 (2019).
[32]
Weiguo Zheng, Hong Cheng, Jeffrey Xu Yu, Lei Zou, and Kangfei Zhao. 2019. Interactive natural language question answering over knowledge graphs. Information Sciences 481 (2019), 141–159.
[33]
Lei Zou, Ruizhe Huang, Haixun Wang, Jeffrey Xu Yu, Wenqiang He, and Dongyan Zhao. 2014. Natural language question answering over RDF: a graph data driven approach. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data. 313–324.



Published In

IUI '21: Proceedings of the 26th International Conference on Intelligent User Interfaces
April 2021, 618 pages
ISBN: 9781450380171
DOI: 10.1145/3397481
Publisher: Association for Computing Machinery, New York, NY, United States


Author Tags

1. Education and learning-related technologies
2. ExplanatorY Artificial Intelligence (YAI)
3. Methods for explanations

