DOI: 10.1145/3491101.3503568
Extended Abstract

A Case Study of Integrating Fairness Visualization Tools in Machine Learning Education

Published: 28 April 2022

Abstract

As media attention and research have demonstrated, Artificial Intelligence systems do not adequately address issues of fairness and bias, and more education on these topics is needed in both industry and higher education. Computer science courses that cover AI fairness and bias currently either focus on statistical analysis or bring in philosophical perspectives that lack actionable takeaways for students. Building on long-standing pedagogical research demonstrating the importance of tools and visualizations for reinforcing student learning, this case study reports on the impact of adopting publicly available visualization tools from HCI practice as a resource for students examining algorithmic fairness concepts. Through qualitative review and observation of four focus groups, we examined six open-source fairness tools that enable students to visualize, quantify, and explore algorithmic biases. The findings provide insight into the benefits, challenges, and opportunities of integrating fairness tools into machine learning education.
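
For context on what such tools quantify, the following is a minimal illustrative sketch (not drawn from the paper's study; the data, variable names, and threshold-free setup are assumptions) of one widely used fairness metric, the demographic parity difference, computed by hand with NumPy. The tools examined in the paper expose metrics of this kind behind richer, interactive visual interfaces.

import numpy as np

# Illustrative data only: random binary decisions and a binary protected attribute.
rng = np.random.default_rng(seed=0)
y_pred = rng.integers(0, 2, size=1000)   # hypothetical model decisions (1 = favorable outcome)
group = rng.integers(0, 2, size=1000)    # hypothetical protected-attribute membership (0 or 1)

# Selection (favorable-outcome) rate within each group.
rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()

# Demographic parity difference: 0 means both groups receive the favorable
# outcome at the same rate; larger absolute values indicate greater disparity.
dp_difference = rate_0 - rate_1
print(f"rate group 0 = {rate_0:.3f}, rate group 1 = {rate_1:.3f}, difference = {dp_difference:.3f}")

Fairness toolkits typically pair metrics like this with interactive visualizations (for example, exploring how disparities shift as a decision threshold moves), which is part of what makes them attractive for classroom use.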

Supplementary Material

MP4 File (3491101.3503568-video-preview.mp4)
Video Preview

Published In

CHI EA '22: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022
3066 pages
ISBN: 9781450391566
DOI: 10.1145/3491101
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 April 2022

Author Tags

  1. Ethics
  2. Fairness
  3. Tools

Qualifiers

  • Extended-abstract
  • Research
  • Refereed limited

Conference

CHI '22
CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

Acceptance Rates

Overall Acceptance Rate 6,164 of 23,696 submissions, 26%
