Research Article | Open Access
DOI: 10.1145/3514094.3534188

Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy

Published: 27 July 2022

Abstract

Data-driven predictive algorithms are widely used to automate and guide high-stakes decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals of distributive justice, as formulated by political and legal philosophers. The perspectives of feminist political philosophers on social justice, by contrast, have been largely neglected. Some feminist philosophers have criticized the local scope of the paradigm of distributive justice and have proposed corrective amendments to surmount its limitations. The present paper brings some key insights of feminist political philosophy to algorithmic fairness. The paper has three goals. First, I show that algorithmic fairness does not accommodate structural injustices in its current scope. Second, I defend the relevance of structural injustices -- as pioneered in the contemporary philosophical literature by Iris Marion Young -- to algorithmic fairness. Third, I take some steps in developing the paradigm of 'responsible algorithmic fairness' to correct for errors in the current scope and implementation of algorithmic fairness. I close with some reflections on directions for future research.

Supplementary Material

MP4 File (aies191.mp4)
Data-driven predictive algorithms are widely used to automate and guide high-stakes decision making such as mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases by proposing mathematical metrics of fairness, typically motivated by ideals of local distributive justice. Some political philosophers have criticized the paradigm of local distributive justice and have proposed corrective amendments to surmount its limitations. I draw on some key insights of these efforts in the pursuit of three goals. (1) I show that algorithmic fairness does not easily accommodate structural injustices in its current scope. (2) I defend the relevance of structural injustices to algorithmic fairness. (3) I take some steps in developing the paradigm of 'responsible algorithmic fairness' to correct for errors in the current scope and implementation of algorithmic fairness.



Published In

AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
July 2022, 939 pages
ISBN: 9781450392471
DOI: 10.1145/3514094
    This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. algorithmic bias
    2. algorithmic fairness
    3. algorithmic justice
    4. distributive justice
    5. ethical machine learning
    6. ethics of artificial intelligence
    7. feminist philosophy
    8. political philosophy
    9. responsibility
    10. structural injustice


Conference

AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
May 19 - 21, 2021, Oxford, United Kingdom

Acceptance Rates

Overall Acceptance Rate: 61 of 162 submissions, 38%
