DOI: 10.1145/3617694.3623223
Research article
Open access

The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements

Published: 30 October 2023

Abstract

Prior work has established the importance of integrating AI ethics topics into computer and data sciences curricula. We provide evidence suggesting that one of the critical objectives of AI Ethics education must be to raise awareness of AI harms. While there are various sources to learn about such harms, The AI Incident Database (AIID) is one of the few attempts at offering a relatively comprehensive database indexing prior instances of harms or near harms stemming from the deployment of AI technologies in the real world. This study assesses the effectiveness of AIID as an educational tool to raise awareness regarding the prevalence and severity of AI harms in socially high-stakes domains. We present findings obtained through a classroom study conducted at an R1 institution as part of a course focused on the societal and ethical considerations around AI and ML. Our qualitative findings characterize students’ initial perceptions of core topics in AI ethics and their desire to close the educational gap between their technical skills and their ability to think systematically about ethical and societal aspects of their work. We find that interacting with the database helps students better understand the magnitude and severity of AI harms and instills in them a sense of urgency around (a) designing functional and safe AI and (b) strengthening governance and accountability mechanisms. Finally, we compile students’ feedback about the tool and our class activity into actionable recommendations for the database development team and the broader community to improve awareness of AI harms in AI ethics education.

Supplemental Material

PDF File: Appendix



Published In

EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
October 2023, 498 pages
ISBN: 9798400703812
DOI: 10.1145/3617694
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. AI harms
  2. AI safety
  3. classroom exploration
  4. educational tool
  5. incident database

Qualifiers

  • Research-article
  • Research
  • Refereed limited
