DOI: 10.1145/3461702.3462528

Fairness and Data Protection Impact Assessments

Published: 30 July 2021

Abstract

In this paper, we critically examine the effectiveness of the requirement to conduct a Data Protection Impact Assessment (DPIA) in Article 35 of the General Data Protection Regulation (GDPR) in light of fairness metrics. Through this analysis, we explore the role of the fairness principle as introduced in Article 5(1)(a) and its multifaceted interpretation in the obligation to conduct a DPIA. Our paper argues that although there is a significant theoretical role for considerations of fairness in the DPIA process, an analysis of the various guidance documents issued by data protection authorities on the obligation to conduct a DPIA reveals that they rarely mention the fairness principle in practice. Our analysis questions this omission and assesses the capacity of fairness metrics to be truly operationalized within DPIAs. We conclude by exploring the practical effectiveness of DPIAs with particular reference to (1) technical challenges that have an impact on the usefulness of DPIAs irrespective of a controller's willingness to actively engage in the process, (2) the context-dependent nature of the fairness principle, and (3) the key role played by data controllers in the determination of what is fair.
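
As a rough illustration (not drawn from the paper itself) of what "operationalizing" a fairness metric within a DPIA might involve, the short Python sketch below computes two group fairness metrics from the algorithmic-fairness literature the paper engages with: the demographic parity gap and the equalized-odds gap of a binary classifier with respect to a binary protected attribute. All function names and data here are hypothetical.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # Absolute difference in positive-prediction rates between the two groups.
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equalized_odds_gap(y_true, y_pred, group):
        # Largest gap in true-positive or false-positive rates between the two groups.
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        gaps = []
        for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
            mask = y_true == label
            gaps.append(abs(y_pred[mask & (group == 0)].mean()
                            - y_pred[mask & (group == 1)].mean()))
        return max(gaps)

    # Hypothetical toy data: binary predictions and a binary protected attribute.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("Equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))

Even this toy exercise turns on choices the abstract flags as context dependent: which attribute defines the groups, which metric is chosen, and what size of gap a controller would treat as unfair.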



Published In

AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
July 2021
1077 pages
ISBN:9781450384735
DOI:10.1145/3461702

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. algorithmic fairness
  2. data protection impact assessments
  3. ethics of artificial intelligence
  4. fairness principle
  5. general data protection regulation
  6. regulation of artificial intelligence

Qualifiers

  • Research-article

Conference

AIES '21

Acceptance Rates

Overall Acceptance Rate 61 of 162 submissions, 38%


