Research article | Open access
DOI: 10.1145/3630106.3658992

Participation in the age of foundation models

Published: 05 June 2024

Abstract

Growing interest and investment in the capabilities of foundation models have positioned such systems to impact a wide array of services, from banking to healthcare. Alongside these opportunities is the risk that these systems reify existing power imbalances and cause disproportionate harm to historically marginalized groups. The larger scale and domain-agnostic manner in which these models operate further heighten the stakes: any errors or harms are liable to recur across use cases. In AI & ML more broadly, participatory approaches hold promise to lend agency and decision-making power to marginalized stakeholders, leading to systems that better advance justice through equitable and distributed governance. But existing approaches in participatory AI/ML are typically grounded in a specific application and set of relevant stakeholders, and it is not straightforward to apply these lessons to the context of foundation models. Our paper aims to fill this gap.
First, we examine existing attempts at incorporating participation into foundation models. We highlight the tension between participation and scale, demonstrating that it is intractable for impacted communities to meaningfully shape a foundation model that is intended to be universally applicable. In response, we develop a blueprint for participatory foundation models that identifies more local, application-oriented opportunities for meaningful participation. In addition to the “foundation” layer, our framework proposes the “subfloor” layer, in which stakeholders develop shared technical infrastructure, norms, and governance for a grounded domain such as clinical care, journalism, or finance, and the “surface” (or application) layer, in which affected communities shape the use of a foundation model for a specific downstream task. The intermediate “subfloor” layer scopes the range of potential harms to consider, and affords communities more concrete avenues for deliberation and intervention. At the same time, it avoids duplicative effort by scaling input across relevant use cases. Through three case studies in clinical care, financial services, and journalism, we illustrate how this multi-layer model can create more meaningful opportunities for participation than solely intervening at the foundation layer.


Published In


FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Foundation models
  2. communities
  3. governance
  4. public participation
  5. stakeholders

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '24

