DOI: 10.1145/3555776.3577771

ComplAI: Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models

Published: 07 June 2023

Abstract

In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior under drift, and to provide a single Trust Factor that evaluates different supervised ML models from an overall responsibility perspective. It helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the models, and (c) compare different models from an overall perspective, thereby facilitating actionable recourse for model improvement. ComplAI is model agnostic, works with different supervised machine learning scenarios and frameworks, and integrates seamlessly with any ML life-cycle framework. Thus, this already-deployed framework aims to unify critical aspects of Responsible AI systems in order to regulate the development process of such real systems. The theory version of the paper and a demo version are available at this link (Zip password: Welcome2022!). A detailed version of the paper is available on arXiv.


      Published In

      SAC '23: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing
      March 2023
      1932 pages
      ISBN:9781450395175
      DOI:10.1145/3555776
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. explainable AI
      2. fairness
      3. transparency
      4. explainability
      5. machine learning
      6. model validation
      7. model analysis
      8. responsible AI

      Qualifiers

      • Poster

      Conference

      SAC '23

      Acceptance Rates

      Overall Acceptance Rate 1,650 of 6,669 submissions, 25%

Article Metrics

• Total Citations: 0
• Total Downloads: 71
• Downloads (last 12 months): 59
• Downloads (last 6 weeks): 3

Reflects downloads up to 15 Sep 2024
