Research Article
Open Access

"Something Fast and Cheap" or "A Core Element of Building Trust"? - AI Auditing Professionals' Perspectives on Trust in AI

Published: 08 November 2024

Abstract

Artificial Intelligence (AI) auditing is a relatively new area of work that currently lacks uniform standards and regulation. As a result, the AI auditing ecosystem is highly diverse, and AI auditing professionals use a variety of auditing methods. So far, little is known about how AI auditors approach the concept of trust in AI through AI audits, particularly regarding the trust of users. This paper reports findings from interviews with 19 AI auditing stakeholders, conducted to understand how AI auditing professionals seek to create calibrated trust in AI tools and AI audits. Themes identified include the AI auditing ecosystem, participants' experiences with AI auditing, and trust in AI audits and AI. The paper extends existing research on trust in AI and AI trustworthiness by contributing the perspectives of key stakeholders on users' trust in AI audits, an essential and so far underexplored part of trust-in-AI research. It shows how information asymmetry with respect to AI audits can decrease the value of audits for users and, consequently, their trust in AI systems. Study participants identify key elements for rebuilding trust and offer recommendations for the AI auditing industry, such as monitoring of auditors and effective communication about AI audits.

Cited By

  • (2024) The Role of Algorithmic Audits and Other Soft Law Approaches in Informing Users' Calibrated Trust in Artificial Intelligence Tools. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 54-56. https://doi.org/10.1145/3678884.3682050. Online publication date: 11-Nov-2024.

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW2
November 2024
5177 pages
EISSN: 2573-0142
DOI: 10.1145/3703902
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. AI auditing
  2. AI auditors
  3. AI users
  4. qualitative field research
  5. trust in AI-based systems

Funding Sources

  • Notre Dame-IBM Technology Ethics Lab
