DOI: 10.1145/3520304.3534023
Research article · Open access

Evolving explainable rule sets

Published: 19 July 2022

Abstract

Most AI systems work as black boxes tasked with generating reasonable outputs for given inputs. Many domains, however, have explainability and trustworthiness requirements that these approaches do not fulfill. Various methods exist to analyze or interpret black-box models after training, but in sensitive domains where white-box models are mandated, the better choice is to use inherently transparent models from the start. In this work, we present a method that evolves explainable rule sets expressed in ordinary, inherently transparent logic. We showcase several sample domains we tackled and discuss the major desirable properties of the resulting models, such as bias detection, knowledge discovery, and modifiability.
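To make the idea concrete, below is a minimal sketch of how a genetic algorithm can evolve rule sets of this kind. It illustrates the general technique only, not the authors' implementation: every representation choice, operator, and parameter (threshold conditions, disjunctive rule sets, accuracy fitness, truncation selection) is an assumption made for readability.

import random

# Illustrative constants; these are assumptions, not values from the paper.
N_FEATURES = 4

def make_condition():
    # A condition compares one feature against a random threshold.
    return (random.randrange(N_FEATURES), random.uniform(0.0, 1.0), random.choice(("<", ">=")))

def make_rule(max_conditions=3):
    # A rule is a conjunction of 1..max_conditions conditions.
    return [make_condition() for _ in range(random.randint(1, max_conditions))]

def make_rule_set(max_rules=4):
    # A rule set is a disjunction of rules: it predicts positive if any rule fires.
    return [make_rule() for _ in range(random.randint(1, max_rules))]

def condition_holds(cond, x):
    i, threshold, op = cond
    return x[i] < threshold if op == "<" else x[i] >= threshold

def predict(rule_set, x):
    return any(all(condition_holds(c, x) for c in rule) for rule in rule_set)

def fitness(rule_set, data):
    # Plain classification accuracy on labeled examples (x, y).
    return sum(predict(rule_set, x) == y for x, y in data) / len(data)

def mutate(rule_set):
    # Replace one randomly chosen condition with a fresh one.
    child = [list(rule) for rule in rule_set]
    rule = random.choice(child)
    rule[random.randrange(len(rule))] = make_condition()
    return child

def crossover(a, b):
    # Exchange whole rules between two parent rule sets.
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return (a[:cut_a] + b[cut_b:]) or make_rule_set()

def evolve(data, pop_size=50, generations=100):
    population = [make_rule_set() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda rs: fitness(rs, data), reverse=True)
        elite = population[: pop_size // 5]  # simple truncation selection
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=lambda rs: fitness(rs, data))

if __name__ == "__main__":
    # Toy target concept: positive iff feature 0 exceeds 0.5.
    data = [(x, x[0] > 0.5) for x in
            (tuple(random.random() for _ in range(N_FEATURES)) for _ in range(200))]
    best = evolve(data)
    print(f"accuracy: {fitness(best, data):.2f}")
    for rule in best:
        print("IF " + " AND ".join(f"x[{i}] {op} {t:.2f}" for i, t, op in rule))

A practical system along these lines would also penalize rule-set size in the fitness function, since compactness is what keeps the evolved rules readable and auditable.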




Published In

GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion
July 2022
2395 pages
ISBN:9781450392686
DOI:10.1145/3520304
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. XAI
  2. explainable AI
  3. genetic algorithms
  4. rule-set evolution

Qualifiers

  • Research-article

Conference

GECCO '22

Acceptance Rates

Overall acceptance rate: 1,669 of 4,410 submissions (38%)


Bibliometrics

Article Metrics

  • Downloads (last 12 months): 151
  • Downloads (last 6 weeks): 15

Reflects downloads up to 06 Feb 2025

