research-article | Public Access
DOI: 10.1145/3530190.3534802

Making AI Explainable in the Global South: A Systematic Review

Published: 29 June 2022

Abstract

Artificial intelligence (AI) and machine learning (ML) are quickly becoming pervasive in ways that impact the lives of all humans across the globe. In an effort to make otherwise “black box” AI/ML systems more understandable, the field of Explainable AI (XAI) has arisen with the goal of developing algorithms, toolkits, frameworks, and other techniques that enable people to comprehend, trust, and manage AI systems. However, although XAI is a rapidly growing area of research, most of the work has focused on contexts in the Global North, and little is known about whether or how XAI techniques have been designed, deployed, or tested with communities in the Global South. This gap is concerning, especially in light of rapidly growing enthusiasm from governments, companies, and academics to use AI/ML to “solve” problems in the Global South. Our paper contributes the first systematic review of XAI research in the Global South, providing an early look at emerging work in the space. We identified 16 papers from 15 different venues that targeted a wide range of application domains; all were published in the last three years. Of the 16 papers, 13 focused on applying a technical XAI method, all of which involved the use of (at least some) data local to the context. However, only three papers engaged with or involved humans in the work, and only one attempted to deploy its XAI system with target users. We close by reflecting on the current state of XAI research in the Global South, discussing data and model considerations for building and deploying XAI systems in these regions, and highlighting the need for human-centered approaches to XAI in the Global South.
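To make concrete what “applying a technical XAI method” typically looks like in work of this kind, the sketch below runs a common post-hoc feature-attribution workflow. It is a minimal illustration only, assuming scikit-learn and the shap package (SHAP is one widely used attribution method in this space); the synthetic tabular data, the regression task, and all variable names are placeholders rather than anything drawn from a reviewed study.

    # Minimal post-hoc feature-attribution sketch (illustrative only).
    # Assumes scikit-learn and the `shap` package are installed.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Stand-in for a locally collected tabular dataset, e.g., health or
    # agricultural records: 500 rows, 4 numeric features, continuous outcome.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

    # An otherwise "black box" model: a random forest regressor.
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer attributes each individual prediction to the input
    # features, relative to a baseline expectation over the data.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Averaging absolute attributions gives a global importance ranking.
    print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

Attribution scores like these explain individual predictions of an otherwise opaque model, but they are only one ingredient: whether such explanations are meaningful to target users in Global South contexts is a separate, human-centered question.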

Supplementary Material

MP4 File (COMPASS_Paper_Session7_OkoloC_2022-07-01.mp4)
Conference Presentation Recording 2022-07-01

Published In

COMPASS '22: Proceedings of the 5th ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies
June 2022, 710 pages
ISBN: 9781450393478
DOI: 10.1145/3530190
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Algorithmic Fairness
  2. Artificial Intelligence
  3. Explainability
  4. Global South
  5. HCI4D
  6. ICTD
  7. Machine Learning
  8. XAI4D

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Acceptance Rates

Overall Acceptance Rate 25 of 50 submissions, 50%
