DOI: 10.1145/3442188.3445896 · FAccT '21 Conference Proceedings · Research Article · Open Access

Re-imagining Algorithmic Fairness in India and Beyond

Published: 01 March 2021

Abstract

Conventional algorithmic fairness is West-centric, as seen in its subgroups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.

References

[1]
2018. National Strategy for Artificial Intelligence #AI4ALL. Niti Aayog.
[2]
2020. Citizen COP Foundation. https://www.citizencop.org
[3]
2020. Deep Learning Indaba. https://deeplearningindaba.com/2020/
[4]
2020. Design Beku. https://designbeku.in/
[5]
2020. India used facial recognition tech to identify 1,100 individuals at a recent riot | TechCrunch. https://techcrunch.com/2020/03/11/india-used-facial-recognition-tech-to-identify-1100-individuals-at-a-recent-riot. (Accessed on 07/28/2020).
[6]
2020. Internet Freedom Foundation. https://internetfreedom.in/
[7]
2020. Khipu AI. https://github.com/khipu-ai
[8]
2020. Lacuna Fund. https://lacunafund.org/
[9]
2020. Safetipin. https://safetipin.com/
[10]
2020. SEWA. http://www.sewa.org/
[11]
Delna Abraham and Ojaswi Rao. [n.d.]. 84% Dead In Cow-Related Violence Since 2010 Are Muslim; 97% Attacks After 2014 | IndiaSpend. https://archive.indiaspend.com/cover-story/86-dead-in-cow-related-violence-since-2010-are-muslim-97-attacks-after-2014-2014. (Accessed on 08/16/2020).
[12]
Oshin Agarwal, Yinfei Yang, Byron C Wallace, and Ani Nenkova. 2020. Entity-Switched Datasets: An Approach to Auditing the In-Domain Robustness of Named Entity Recognition Models. arXiv preprint arXiv:2004.04123 (2020).
[13]
Digital Government Agency (Agesic). 2019. Artificial Intelligence for the digital government | English version. https://www.gub.uy/agencia-gobierno-electronico-sociedad-informacion-conocimiento/sites/agencia-gobierno-electronico-sociedad-informacion-conocimiento/files/documentos/publicaciones/IA%20Strategy%20-20english%20version.pdf. In AI whitepaper.
[14]
Varun Aggarwal. 2018. India's mess of complexity is just what AI needs | MIT Technology Review. https://www.technologyreview.com/2018/06/27/240474/indias-mess-of-complexity-is-just-what-ai-needs/. (Accessed on 09/18/2020).
[15]
Saumya Agrawal. 2020. Chutia| 'Chutia not slang, but community where I belong': Assam woman's online job application rejected due to surname | Trending & Viral News. https://www.timesnownews.com/the-buzz/article/chutia-not-slang-but-community-where-i-belong-assam-womans-online-job-application-rejected-due-to-surname/625556. (Accessed on 09/28/2020).
[16]
Amazon. 2020. We are implementing a one-year moratorium on police use of Rekognition. https://blog.aboutamazon.com/policy/we-are-implementing-a-one-year-moratorium-on-police-use-of-rekognition. (Accessed on 08/29/2020).
[17]
BR Ambedkar. 1916. Castes in India: Their mechanism, genesis and development (Vol. 1). Columbia: Indian Antiquary. Ambedkar, BR (1936). Annihilation of Caste. Jullundur: Bheem Patrika Publications (1916).
[18]
Bhimrao Ramji Ambedkar. 2014. Annihilation of caste: The annotated critical edition. Verso Books.
[19]
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias --- ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. (Accessed on 07/30/2020).
[20]
Arjun Appadurai. 2000. Spectral housing and urban cleansing: notes on millennial Mumbai. Public culture 12, 3 (2000), 627--651.
[21]
Payal Arora. 2016. Bottom of the data pyramid: Big data and the global south. International Journal of Communication 10 (2016), 19.
[22]
Peter M Asaro. 2019. AI ethics in predictive policing: From models of threat to an ethics of care. IEEE Technology and Society Magazine 38, 2 (2019), 40--53.
[23]
Itai Ashlagi, Amin Saberi, and Ali Shameli. 2020. Assignment mechanisms under distributional constraints. Operations Research 68, 2 (2020), 467--479.
[24]
Savita Bailur, Devina Srivastava, and Hélène (Caribou Digital) Smertnik. 2019. Women and ID in a digital age: Five fundamental barriers and new design questions. https://savitabailur.com/2019/09/09/women-and-id-in-a-digital-age-five-fundamental-barriers-and-new-design-questions/. (Accessed on 08/02/2020).
[25]
Robert Baker. 2001. Bioethics and Human Rights: A Historical Perspective. Cambridge Quarterly of Healthcare Ethics 10, 3 (2001), 241--252. https://doi.org/10.1017/S0963180101003048
[26]
Shakuntala Banaji and Ram Bhat. 2019. WhatsApp Vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India. Department of Media and Communications, LSE.
[27]
Abhijit Banerjee, Marianne Bertrand, Saugato Datta, and Sendhil Mullainathan. 2009. Labor market discrimination in Delhi: Evidence from a field experiment. Journal of comparative Economics 37, 1 (2009), 14--27.
[28]
Soumyarendra Barik. 2020. Facial recognition based surveillance systems to be installed at 983 railway stations across India. https://www.medianama.com/2020/01/223-facial-recognition-system-indian-railways-facial-recognition/. (Accessed on 10/03/2020).
[29]
Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2017. Fairness in machine learning. NIPS Tutorial 1 (2017).
[30]
Solon Barocas and Andrew D Selbst. 2016. Big data's disparate impact. Calif. L. Rev. 104 (2016), 671.
[31]
Surender Baswana, Partha Pratim Chakrabarti, Sharat Chandran, Yashodhan Kanoria, and Utkarsh Patange. 2019. Centralized admissions for engineering colleges in India. INFORMS Journal on Applied Analytics 49, 5 (2019), 338--354.
[32]
Abhishek Baxi. 2018. Law Enforcement Agencies In India Are Using Artificial Intelligence To Nab Criminals. https://www.forbes.com/sites/baxiabhishek/2018/09/28/law-enforcement-agencies-in-india-are-using-artificial-intelligence-to-nab-criminals-heres-how. (Accessed on 08/30/2020).
[33]
BBC. 2019. Nirbhaya case: Four Indian men executed for 2012 Delhi bus rape and murder - BBC News. https://www.bbc.com/news/world-asia-india-51969961. (Accessed on 09/01/2020).
[34]
Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics 6 (2018), 587--604.
[35]
Andre Beteille. 1990. Race, caste and gender. Man (1990), 489--504.
[36]
Naveen Bharathi, Deepak V Malghan, and Andaleeb Rahman. 2018. Isolated by caste: Neighbourhood-scale residential segregation in Indian metros. IIM Bangalore Research Paper 572 (2018).
[37]
Anubha Bhonsle and Pallavi Prasad. 2020. Counting cows, not rural health indicators. https://ruralindiaonline.org/articles/counting-cows-not-rural-health-indicators/. (Accessed on 08/02/2020).
[38]
Reuben Binns. 2018. Fairness in machine learning: Lessons from political philosophy. In Conference on Fairness, Accountability and Transparency. 149--159.
[39]
Abeba Birhane. 2020. Algorithmic colonization of Africa. SCRIPTed 17 (2020), 389.
[40]
PR Blake, K McAuliffe, J Corbit, TC Callaghan, O Barry, A Bowie, L Kleutsch, KL Kramer, E Ross, H Vongsachang, et al. 2015. The ontogeny of fairness in seven societies. Nature 528, 7581 (2015), 258--261.
[41]
Alexander Bogner, Beate Littig, and Wolfgang Menz. 2009. Interviewing experts. Springer.
[42]
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems. 4349--4357.
[43]
Vani K Borooah, Amaresh Dubey, and Sriya Iyer. 2007. The effectiveness of jobs reservation: caste, religion and economic status in India. Development and change 38, 3 (2007), 423--445.
[44]
C Boyes-Watson. 2014. Suffolk University, College of Arts & Sciences. Center for Restorative Justice. Retrieved on November 28 (2014), 2015.
[45]
Eric Brewer, Michael Demmer, Bowei Du, Melissa Ho, Matthew Kam, Sergiu Nedevschi, Joyojeet Pal, Rabin Patra, Sonesh Surana, and Kevin Fall. 2005. The case for technology in developing regions. Computer 38, 6 (2005), 25--38.
[46]
Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency. 77--91.
[47]
Ministry of Statistics Central Statistics Office and Programme Implementation. 2018. Women and Men in India: A statistical compilation of Gender related Indicators in India. Technical Report. Government of India.
[48]
Uma Chakravarti. 1993. Conceptualising Brahmanical patriarchy in early India: Gender, caste, class and state. Economic and Political Weekly (1993), 579--585.
[49]
Maitrayee Chaudhuri. 2004. Feminism in India. (2004).
[50]
Anna Clark. 2013. ZIP Code History: How They Define Us | The New Republic. https://newrepublic.com/article/112558/zip-code-history-how-they-define-us. (Accessed on 09/24/2020).
[51]
Andrew Cotter, Heinrich Jiang, Maya R Gupta, Serena Wang, Taman Narayan, Seungil You, and Karthik Sridharan. 2019. Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals. Journal of Machine Learning Research 20, 172 (2019), 1--59.
[52]
Kate Crawford. 2013. The hidden biases in big data. Harvard business review 1, 1 (2013), 814.
[53]
Kate Crawford. 2013. Think again: Big data. Foreign Policy 9 (2013).
[54]
William Crumpler. 2020. How Accurate are Facial Recognition Systems - and Why Does It Matter? | Center for Strategic and International Studies. (Accessed on 07/28/2020).
[55]
Camera Culture. 2018. Economic Impact of Discoverability of Localities and Addresses in India --- Emerging Worlds. http://mitemergingworlds.com/blog/2018/2/12/economic-impact-of-discoverability-of-localities-and-addresses-in-india. (Accessed on 09/24/2020).
[56]
Abdi Lahir Dahir. 2019. Mobile loans apps Tala, Branch, Okash face scrutiny in Kenya --- Quartz Africa. https://qz.com/africa/1712796/mobile-loans-apps-tala-branch-okash-face-scrutiny-in-kenya/. (Accessed on 08/04/2020).
[57]
Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. arXiv preprint arXiv:1905.12516 (2019).
[58]
The Living New Deal. [n.d.]. African Americans. https://livingnewdeal.org/what-was-the-new-deal/new-deal-inclusion/african-americans-2/. (Accessed on 08/29/2020).
[59]
Mark Diaz, Isaac Johnson, Amanda Lazar, Anne Marie Piper, and Darren Gergle. 2018. Addressing Age-Related Bias in Sentiment Analysis. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI '18). Association for Computing Machinery, New York, NY, USA, 1--14. https://doi.org/10.1145/3173574.3173986
[60]
Neha Dixit. July. Fair, But Not So Lovely: India's Obsession With Skin Whitening | by Neha Dixit | BRIGHT Magazine. https://brightthemag.com/fair-but-not-so-lovely-indias-obsession-with-skin-whitening-beauty-body-image-bleaching-4d6ba9c9743d. (Accessed on 09/25/2020).
[61]
Pranav Dixit. 2019. India Is Creating A National Facial Recognition System. https://www.buzzfeednews.com/article/pranavdixit/india-is-creating-a-national-facial-recognition-system-and. (Accessed on 08/30/2020).
[62]
Roel Dobbe, Sarah Dean, Thomas Gilbert, and Nitin Kohli. 2018. A broader view on bias in automated decision-making: Reflecting on epistemology and dynamics. arXiv preprint arXiv:1807.00553 (2018).
[63]
Jonathan Donner. 2015. After access: Inclusion, development, and a more mobile Internet. MIT press.
[64]
Jonathan Donner, Nimmi Rangaswamy, M Steenson, and Carolyn Wei. 2008. "Express yourself" / "Stay together": Tensions surrounding mobile communication in the middle-class Indian family. J. Katz (Ed.), Handbook of mobile communication studies (2008), 325--337.
[65]
Kevin P Donovan. 2015. The biometric imaginary: Bureaucratic technopolitics in post-apartheid welfare. Journal of Southern African Studies 41, 4 (2015), 815--833.
[66]
Ariel Dorfman and Armand Mattelart. 1975. How to Read Donald Duck. International General New York.
[67]
Susan Dray, Ann Light, A Dearden, Vanessa Evers, Melissa Densmore, D Ramachandran, M Kam, G Marsden, N Sambasivan, T Smyth, et al. 2012. Human--Computer Interaction for Development: Changing Human--Computer Interaction to Change the World. In The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Third Edition. CRC press, 1369--1394.
[68]
Esther Duflo. 2005. Why political reservations? Journal of the European Economic Association 3, 2-3 (2005), 668--678.
[69]
Bert D'espallier, Isabelle Guérin, and Roy Mersland. 2011. Women and repayment in microfinance: A global analysis. World development 39, 5 (2011), 758--772.
[70]
Arturo Escobar. 2011. Encountering development: The making and unmaking of the Third World. Vol. 1. Princeton University Press.
[71]
Indian Express. 2020. Most Indian Nobel winners Brahmins: Gujarat Speaker Rajendra Trivedi. https://indianexpress.com/article/cities/ahmedabad/most-indian-nobel-winners-brahmins-gujarat-speaker-rajendra-trivedi-6198741/. (Accessed on 09/04/2020).
[72]
Frantz Fanon. 2007. The wretched of the earth. Grove/Atlantic, Inc.
[73]
Thomas B Fitzpatrick. 1988. The validity and practicality of sun-reactive skin types I through VI. Archives of dermatology 124, 6 (1988), 869--871.
[74]
Rikin Gandhi, Rajesh Veeraraghavan, Kentaro Toyama, and Vanaja Ramprasad. 2007. Digital green: Participatory video for agricultural extension. In 2007 International conference on information and communication technologies and development. IEEE, 1--10.
[75]
Harris Gardiner. 2013. 5 in New Delhi Rape Case Face Murder Charges- The New York Times. https://www.nytimes.com/2013/01/04/world/asia/murder-charges-filed-against-5-men-in-india-gang-rape.html. (Accessed on 09/13/2020).
[76]
Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. 2019. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 219--226.
[77]
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for datasets. arXiv preprint arXiv:1803.09010 (2018).
[78]
Michael Golebiewski and Danah Boyd. 2019. Data voids: Where missing data can easily be exploited. Data & Society (2019).
[79]
Masahiro Goto, Fuhito Kojima, Ryoji Kurata, Akihisa Tamura, and Makoto Yokoo. 2017. Designing matching mechanisms under general distributional constraints. American Economic Journal: Microeconomics 9, 2 (2017), 226--62.
[80]
Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology. Vol. 47. Elsevier, 55--130.
[81]
Ben Green. 2020. The false promise of risk assessments: epistemic reform and the limits of fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 594--606.
[82]
Ben Green and Salomé Viljoen. 2020. Algorithmic realism: expanding the boundaries of algorithmic thought. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 19--31.
[83]
Akhil Gupta. 2012. Red tape: Bureaucracy, structural violence, and poverty in India. Duke University Press.
[84]
Alexa Hagerty and Igor Rubinov. 2019. Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence. arXiv (2019), arXiv-1907.
[85]
Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 501--512.
[86]
Kurtis Heimerl, Shaddi Hasan, Kashif Ali, Eric Brewer, and Tapan Parikh. 2013. Local, sustainable, small-scale cellular networks. In Proceedings of the Sixth International Conference on Information and Communication Technologies and Development: Full Papers-Volume 1. 2--12.
[87]
Virginia Held et al. 2006. The ethics of care: Personal, political, and global. Oxford University Press on Demand.
[88]
David A Hollinger. 1998. Science, Jews, and secular culture: studies in mid-twentieth-century American intellectual history. Princeton University Press.
[89]
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. 2019. Improving fairness in machine learning systems: What do industry practitioners need?. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1--16.
[90]
Ben Hutchinson and Margaret Mitchell. 2019. 50 years of test (un) fairness: Lessons for machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 49--58.
[91]
Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social Biases in NLP Models as Barriers for Persons with Disabilities. ACL (2020).
[92]
IDSN. 2010. Two thirds of India's Dalits are poor - International Dalit Solidarity Network. https://idsn.org/two-thirds-of-indias-dalits-are-poor/. (Accessed on 08/13/2020).
[93]
IEEE. 2019. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. "Classical Ethics in A/IS". In Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. 36--67.
[94]
S Inayatullah. 2006. Culture and Fairness: The Idea of Civilization Fairness. In Fairness, Globalization and Public Institutions. University of Hawaii Press, 31--33.
[95]
Azra Ismail and Neha Kumar. 2018. Engaging solidarity in data collection practices for community health. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 1--24.
[96]
Mayank Jain. 2016. India's internet population is exploding but women are not logging in. Scroll.in (26 9 2016). https://scroll.in/article/816892/indias-internet-population-is-exploding-but-women-are-not-logging-inia
[97]
Rob Jenkins and Anne Marie Goetz. 1999. Accounts and accountability: theoretical implications of the right-to-information movement in India. Third world quarterly 20, 3 (1999), 603--622.
[98]
Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 9 (2019), 389--399.
[99]
Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2016. Rawlsian fairness for machine learning. arXiv preprint arXiv:1610.09559 1, 2 (2016).
[100]
Divij Joshi. 2020. AI Observatory. http://ai-observatory.in/. (Accessed on 12/30/2020).
[101]
Yuvraj Joshi. 2018. Racial Indirection. UCDL Rev. 52 (2018), 2495.
[102]
Rishi Ranjan Kala. 2019. High gender disparity among internet users in India - The Financial Express. https://www.financialexpress.com/industry/high-gender-disparity-among-internet-users-in-india/1718951/. (Accessed on 10/06/2020).
[103]
Nathan Kallus and Angela Zhou. 2018. Residual unfairness in fair machine learning from prejudiced data. arXiv preprint arXiv:1806.02887 (2018).
[104]
Shivaram Kalyanakrishnan, Rahul Alex Panicker, Sarayu Natarajan, and Shreya Rao. 2018. Opportunities and Challenges for Artificial Intelligence in India. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 164--170.
[105]
Yuichiro Kamada and Fuhito Kojima. 2015. Efficient matching under distributional constraints: Theory and applications. American Economic Review 105, 1 (2015), 67--99.
[106]
Anant Kamath and Vinay Kumar. 2017. In India, Accessible Phones Lead to Inaccessible Opportunities. https://thewire.in/caste/india-accessible-phones-still-lead-inaccessible-opportunities. (Accessed on 01/14/2021).
[107]
Divya Kandukuri. 2018. Casteist Slurs You Need To Know- YouTube. https://www.youtube.com/watch?v=wJwkIxOpqZA. (Accessed on 09/25/2020).
[108]
Kavita Karan. 2008. Obsessions with fair skin: Color discourses in Indian advertising. Advertising & society review 9, 2 (2008).
[109]
Michael Katell, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Bintz, Daniella Raz, and PM Krafft. 2020. Toward situated interventions for algorithmic equity: lessons from the field. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 45--55.
[110]
Patrick Gage Kelley, Yongwei Yang, Courtney Heldreth, Christopher Moessner, Aaron Sedley, Andreas Kramm, David Newman, and Allison Woodruff. 2019. "Happy and Assured that life will be easy 10years from now.": Perceptions of Artificial Intelligence in 8 Countries. arXiv preprint arXiv:2001.00081 (2019).
[111]
Rachna Khaira. 2020. Surveillance Slavery: Swachh Bharat Tags Sanitation Workers To Live-Track Their Every Move | HuffPost India. https://www.huffingtonpost.in/entry/swacch-bharat-tags-sanitation-workers-to-live-track-their-every-move_in_5e4c98a9c5b6b0f6bff11f9b?guccounter=1. (Accessed on 07/28/2020).
[112]
Srinivas Kodali. 2020. Aarogya Setu: A bridge too far? | Deccan Herald. https://www.deccanherald.com/specials/sunday-spotlight/aarogya-setu-a-bridge-too-far-835691.html. (Accessed on 08/01/2020).
[113]
Ava Kofman. 2016. How Facial Recognition Can Ruin Your Life - Intercept. https://theintercept.com/2016/10/13/how-a-facial-recognition-mismatch-can-ruin-your-life/. (Accessed on 07/30/2020).
[114]
Nitin Kohli, Renata Barreto, and Joshua A Kroll. 2018. Translation tutorial: a shared lexicon for research and practice in human-centered software systems. In 1st Conference on Fairness, Accountability, and Transparancy. New York, NY, USA, Vol. 7.
[115]
Neha Kumar and Richard J Anderson. 2015. Mobile phones for maternal health in rural India. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 427--436.
[116]
Sunil Mitra Kumar. 2013. Does access to formal agricultural credit depend on caste? World Development 43 (2013), 315--328.
[117]
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 (2016).
[118]
Michael Kwet. 2019. Digital colonialism: US empire and the new imperialism in the Global South. Race & Class 60, 4 (2019), 3--26.
[119]
Jonas Lerman. 2013. Big data and its exclusions. Stan. L. Rev. Online 66 (2013), 55.
[120]
Kwok Leung and Walter G Stephan. 2001. Social Justice from a Cultural Perspective. (2001).
[121]
Kristian Lum and William Isaac. 2016. To predict and serve? Significance 13, 5 (2016), 14--19.
[122]
Donald J Lund, Lisa K Scheer, and Irina V Kozlenkova. 2013. Culture's impact on the importance of fairness in interorganizational relationships. Journal of International Marketing 21, 4 (2013), 21--43.
[123]
Ruth Macklin. 2004. Double standards in medical research in developing countries. Vol. 2. Cambridge University Press.
[124]
Subramaniam Madheswaran and Paul Attewell. 2007. Caste discrimination in the Indian urban labour market: Evidence from the National Sample Survey. Economic and political Weekly (2007), 4146--4153.
[125]
Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 615--621.
[126]
Vidushi Marda. 2018. Artificial intelligence policy in India: a framework for engaging the limits of data-driven decision-making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, 2133 (2018), 20180087.
[127]
Vidushi Marda and Shivangi Narayan. 2020. Data in New Delhi's predictive policing system. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 317--324.
[128]
Donald Martin Jr, Vinod Prabhakaran, Jill Kuhlberg, Andrew Smart, and William S Isaac. 2020. Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics. ICLR Workshop on Machine Learning in Real Life (ML-IRL) (2020).
[129]
Emma. Martinho-Truswell, Hannah. Miller, Isak Nti Asare, Andre Petheram, Richard (Oxford Insights) Stirling, Constanza Gómez Mont, and Cristina (C Minds) Martinez. 2018. Towards an AI strategy in Mexico: Harnessing the AI revolution. In AI whitepaper.
[130]
Rachel Masika and Savita Bailur. 2015. Negotiating women's agency through ICTs: A comparative study of Uganda and India. Gender, Technology and Development 19, 1 (2015), 43--69.
[131]
Achille Mbembe. 2015. Decolonizing knowledge and the question of the archive.
[132]
Indrani Medhi, Aman Sagar, and Kentaro Toyama. 2006. Text-free user interfaces for illiterate and semi-literate users. In 2006 international conference on information and communication technologies and development. IEEE, 72--82.
[133]
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635 (2019).
[134]
Walter Mignolo. 2011. The darker side of western modernity: Global futures, decolonial options. Duke University Press.
[135]
Government of India Ministry of Home Affairs. [n.d.]. 2011 Census Data. https://www.censusindia.gov.in/2011-Common/CensusData2011.html. (Accessed on 08/26/2020).
[136]
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency. 220--229.
[137]
Shakir Mohamed, Marie-Therese Png, and William Isaac. 2020. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology (2020), 1--26.
[138]
Angel Mohan. 2018. Why Urban Indian Women Turn Down Job Opportunities Away From Home |. https://www.indiaspend.com/why-urban-indian-women-turn-down-job-opportunities-away-from-home- 94002/. (Accessed on 09/25/2020).
[139]
Chandra Talpade Mohanty. 2005. Feminism without borders: Decolonizing theory, practicing solidarity. Zubaan.
[140]
Anahita Mukherji. [n.d.]. The Cisco Case Could Expose Rampant Prejudice Against Dalits in Silicon Valley. https://thewire.in/caste/cisco-caste-discrimination-silicon-valley-dalit-prejudice. (Accessed on 08/14/2020).
[141]
Geoff Mulgan and Vincent Straub. [n.d.]. The new ecosystem of trust: how data trusts, collaboratives and coops can help govern data for the maximum public benefit | Nesta. https://www.nesta.org.uk/blog/new-ecosystem-trust/. (Accessed on 08/21/2020).
[142]
Deirdre K Mulligan, Joshua A Kroll, Nitin Kohli, and Richmond Y Wong. 2019. This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1--36.
[143]
Anand Murali. 2019. How India's data labellers are powering the global AI race | FactorDaily. https://factordaily.com/indian-data-labellers-powering-the-global-ai-race/. (Accessed on 09/13/2020).
[144]
Lisa P Nathan, Michelle Kaczmarek, Maggie Castor, Shannon Cheng, and Raquel Mann. 2017. Good for Whom? Unsettling Research Practice. In Proceedings of the 8th International Conference on Communities and Technologies. 290--297.
[145]
Newslaundry and Oxfam India. 2019. Who Tells Our Stories Matters: Representation of Marginalised Caste Groups in Indian Newsrooms. (8 2019).
[146]
Helen Nissenbaum. 1996. Accountability in a computerized society. Science and engineering ethics 2, 1 (1996), 25--42.
[147]
Rodrigo Ochigame. 2020. The Long History of Algorithmic Fairness. Phenomenal World (2020).
[148]
Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2019. Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data 2 (2019), 13.
[149]
Joyojeet Pal. 2008. Computers and the promise of development: aspiration, neoliberalism and "technolity" in India's ICTD enterprise. A paper presented at confronting the Challenge of Technology for Development: Experiences from the BRICS (2008), 29--30.
[150]
Joyojeet Pal. 2015. Banalities turned viral: Narendra Modi and the political tweet. Television & New Media 16, 4 (2015), 378--387.
[151]
Lawrence A Palinkas, Sarah M Horwitz, Carla A Green, Jennifer P Wisdom, Naihua Duan, and Kimberly Hoagwood. 2015. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and policy in mental health and mental health services research 42, 5 (2015), 533--544.
[152]
Rohini Pande. 2003. Can mandated political representation increase policy influence for disadvantaged minorities? Theory and evidence from India. American Economic Review 93, 4 (2003), 1132--1151.
[153]
Kundan Pandey. 2020. COVID-19 lockdown highlights India's great digital divide. https://www.downtoearth.org.in/news/governance/covid-19-lockdown-highlights-india-s-great-digital-divide-72514. (Accessed on 01/14/2021).
[154]
Priti Patnaik. 2012. Social audits in India - a slow but sure way to fight corruption. https://www.theguardian.com/global-development/poverty-matters/2012/jan/13/india-social-audits-fight-corruption. (Accessed on 08/21/2020).
[155]
Amy Paul, C Jolley, and Aubra Anthony. 2018. Reflecting the Past, Shaping the Future: Making AI Work for International Development. USAID. gov (2018).
[156]
Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation Sensitivity Analysis to Detect Unintended Model Biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 5744--5749.
[157]
Ashwin Rajadesingan, Ramaswami Mahalingam, and David Jurgens. 2019. Smart, Responsible, and Upper Caste Only: Measuring Caste Attitudes through Large-Scale Analysis of Matrimonial Profiles. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 13. 393--404.
[158]
Usha Ramanathan. 2014. Biometrics use for social protection programmes in India-Risk: Violating human rights of the poor. United Nations Research Institute for Social Development 2 (2014).
[159]
Usha Ramanathan. 2015. Considering Social Implications of Biometric Registration: A Database Intended for Every Citizen in India [Commentary]. IEEE Technology and Society Magazine 34, 1 (2015), 10--16.
[160]
C. Rangarajan, S. Mahendra Dev, K. Sundaram, Mahesh Vyas, and K.L Datta. 2014. Report of the Expert Group to Review the Methodology for Measurement of Poverty. Technical Report. Government of India Planning Commission.
[161]
Rebecca Ratcliffe. 2019. How a glitch in India's biometric welfare system can be lethal. The Guardian. https://www.theguardian.com/technology/2019/oct/16/glitch-india-biometric-welfare-system-starvation. (Accessed on 07/29/2020).
[162]
Sharmila Rege. 1998. Dalit women talk differently: A critique of 'difference' and towards a Dalit feminist standpoint position. Economic and Political Weekly (1998), WS39--WS46.
[163]
World Bank Human Development Unit South Asia Region. 2009. People with Disabilities in India: From Commitments to Outcomes. http://documents1.worldbank.org/curated/en/577801468259486686/pdf/502090WP0Peopl1Box0342042B01PUBLIC1.pdf. (Accessed on 08/26/2020).
[164]
Henry S. Richardson. 2012. Fairness and Political Equality: India and the U.S. https://law.utah.edu/event/fairness-and-political-equality-india-and-the-u-s/.
[165]
Sarah T Roberts. 2016. Digital refuse: Canadian garbage, commercial content moderation and the global circulation of social media's waste. Wi: journal of mobile media (2016).
[166]
Valerian Rodrigues. 2011. Justice as the Lens: Interrogating Rawls through Sen and Ambedkar. Indian Journal of Human Development 5, 1 (2011), 153--174.
[167]
Heather M Roff. 2020. Expected utilitarianism. arXiv preprint arXiv:2008.07321 (2020).
[168]
Oliver Rowntree. 2020. The mobile gender gap report 2020.
[169]
Arundhati Roy. 2014. Capitalism: A ghost story. Haymarket Books.
[170]
RT. 2017. 'Whoever leads in AI will rule the world': Putin to Russian children on Knowledge Day. RT World News. https://www.rt.com/news/401731-ai-rule-world-putin/. (Accessed on 09/20/2020).
[171]
Cynthia Rudin and Joanna Radin. 2019. Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition. Harvard Data Science Review 1, 2 (2019).
[172]
Anouk Ruhaak. [n.d.]. Mozilla Foundation - When One Affects Many: The Case For Collective Consent. https://foundation.mozilla.org/en/blog/when-one-affects-many-case-collective-consent/. (Accessed on 08/21/2020).
[173]
Rukmini S. 2020. India's poor are also document-poor. https://www.livemint.com/news/india/india-s-poor-are-also-document-poor-11578300732736.html. (Accessed on 09/13/2020).
[174]
Rukmini S. 2019. In India, who speaks in English, and where? https://www.livemint.com/news/india/in-india-who-speaks-in-english-and-where-1557814101428.html. (Accessed on 09/25/2020).
[175]
Nithya Sambasivan. 2019. The remarkable illusions of technology for social good. interactions 26, 3 (2019), 64--66.
[176]
Nithya Sambasivan and Paul M Aoki. 2017. Imagined Connectivities: Synthesized Conceptions of Public Wi-Fi in Urban India. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 5917--5928.
[177]
Nithya Sambasivan, Amna Batool, Nova Ahmed, Tara Matthews, Kurt Thomas, Laura Sanely Gaytán-Lugo, David Nemer, Elie Bursztein, Elizabeth Churchill, and Sunny Consolvo. 2019. "They Don't Leave Us Alone Anywhere We Go" Gender and Digital Abuse in South Asia. In proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1--14.
[178]
Nithya Sambasivan, Garen Checkley, Amna Batool, Nova Ahmed, David Nemer, Laura Sanely Gaytán-Lugo, Tara Matthews, Sunny Consolvo, and Elizabeth Churchill. 2018. "Privacy is not for me, it's for those rich women": Performative Privacy Practices on Mobile Phones by Women in South Asia. In Fourteenth Symposium on Usable Privacy and Security ({SOUPS} 2018). 127--142.
[179]
Nithya Sambasivan, Ed Cutrell, Kentaro Toyama, and Bonnie Nardi. 2010. Intermediated technology use in developing communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2583--2592.
[180]
Nithya Sambasivan and Jess Holbrook. 2018. Toward responsible AI for the next billion users. interactions 26, 1 (2018), 68--71.
[181]
Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora Aroyo. 2021. "Everyone wants to do the model work, not the data work": Data Cascades in High-Stakes AI. In proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.
[182]
Nithya Sambasivan and Thomas Smyth. 2010. The human infrastructure of ICTD. In Proceedings of the 4th ACM/IEEE international conference on information and communication technologies and development. 1--9.
[183]
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1668--1678.
[184]
Nadia Saracini and Murali Shanmugavelan. 2019. BOND: Caste and Development. (2019).
[185]
Marie Schäfer, Daniel BM Haun, and Michael Tomasello. 2015. Fair is not fair everywhere. Psychological science 26, 8 (2015), 1252--1260.
[186]
Amartya Kumar Sen. 2009. The idea of justice. Harvard University Press.
[187]
Sungyong Seo, Hau Chan, P Jeffrey Brantingham, Jorja Leap, Phebe Vayanos, Milind Tambe, and Yan Liu. 2018. Partially generative neural networks for gang crime classification with partial information. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 257--263.
[188]
Aabid Shafi. 2018. Disability rights: Wheelchair users cannot access most of Delhi's buses. https://scroll.in/roving/894005/in-photos-why-wheelchair-users-in-delhi-find-it-difficult-to-use-buses-even-low-floor-ones. (Accessed on 09/25/2020).
[189]
Shreya Shah. [n.d.]. #MissionCashless: Few use mobiles, fewer know what internet is in adivasi belts of Madhya Pradesh. https://scroll.in/article/824882/missioncashless-few-use-mobiles-fewer-know-what-internet-is-in-adivasi-belts-of-madhya-pradesh. (Accessed on 08/14/2020).
[190]
Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D Sculley. 2017. No classification without representation: Assessing geodiversity issues in open data sets for the developing world. arXiv preprint arXiv:1711.08536 (2017).
[191]
Murali Shanmugavelan. 2018. Everyday Communicative Practices of Arunthathiyars: The Contribution of Communication Studies to the Analysis of Caste Exclusion and Subordination of a Dalit Community in Tamil Nadu, India. (2018).
[192]
Donghee (Don) Shin. 2019. Toward Fair, Accountable, and Transparent Algorithms: Case Studies on Algorithm Initiatives in Korea and China. Javnost - The Public 26, 3 (2019), 274--290. https://doi.org/10.1080/13183222.2019.1589249
[193]
Ranjit Singh. 2018. 'The Living Dead'. Whispers from the Field: Ethnographic Poetry and Creative Prose (2018), 29--31.
[194]
Ranjit Singh and Steven J Jackson. 2017. From Margins to Seams: Imbrication, Inclusion, and Torque in the Aadhaar Identification Project. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 4776--4824.
[195]
Dean Spade. 2015. Normal life: Administrative violence, critical trans politics, and the limits of law. Duke University Press.
[196]
Ajantha Subramanian. 2015. Making merit: The Indian Institutes of Technology and the social life of caste. Comparative Studies in Society and History 57, 2 (2015), 291.
[197]
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976 (2019).
[198]
Ranjula Bali Swain and Fan Yang Wallentin. 2009. Does microfinance empower women? Evidence from self-help groups in India. International review of applied economics 23, 5 (2009), 541--556.
[199]
Nisha Tamang. 2020. Section 377: Challenges and Changing Perspectives in the Indian Society. Changing Trends in Human Thoughts and Perspectives: Science, Humanities and Culture Part I (2020), 68.
[200]
Divy Thakkar, Nithya Sambasivan, Purva Kulkarni, Pratap Kalenahalli Sudarshan, and Kentaro Toyama. 2018. The Unexpected Entry and Exodus of Women in Computing and HCI in India. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1--12.
[201]
David R Thomas. 2006. A general inductive approach for analyzing qualitative evaluation data. American journal of evaluation 27, 2 (2006), 237--246.
[202]
Sukhadeo Thorat and Paul Attewell. 2007. The legacy of social exclusion: A correspondence study of job discrimination in India. Economic and political weekly (2007), 4141--4145.
[203]
Deeptiman Tiwary. 2015. Almost 68 percent inmates undertrials, 70 per cent of convicts illiterate. The Indian Express. https://indianexpress.com/article/india/india-news-india/almost-68-inmates-undertrials-70-of-convicts-illiterate/. (Accessed on 07/28/2020).
[204]
Kentaro Toyama. 2015. Geek heresy: Rescuing social change from the cult of technology. PublicAffairs.
[205]
Tunisia. 2018. National AI Strategy: Unlocking Tunisia's capabilities potential. http://www.anpr.tn/national-ai-strategy-unlocking-tunisias-capabilities-potential/. In AI workshop.
[206]
Mazar Ullah. [n.d.]. Court told design flaws led to Bhopal leak. The Guardian. https://www.theguardian.com/world/2000/jan/12/1. (Accessed on 08/21/2020).
[207]
Carol Upadhya. 2007. Employment, exclusion and 'merit' in the Indian IT industry. Economic and Political Weekly (2007), 1863--1868.
[208]
Rajesh Veeraraghavan. 2013. Dealing with the digital panopticon: the use and subversion of ICT in an Indian bureaucracy. In Proceedings of the Sixth International Conference on Information and Communication Technologies and Development: Full Papers-Volume 1. 248--255.
[209]
Vivek Srinivasan, Rajendran Narayanan, Dipanjan Chakraborty, Rajesh Veeraraghavan, and Vibhore Vardhan. 2018. Are technology-enabled cash transfers really 'direct'? Economic and Political Weekly 53, 30 (2018).
[210]
Ngugi Wa Thiong'o. 1992. Decolonising the mind: The politics of language in African literature. East African Publishers.
[211]
Immanuel Wallerstein. 1991. World system versus world-systems: A critique. Critique of Anthropology 11, 2 (1991), 189--194.
[212]
Yilun Wang and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of personality and social psychology 114, 2 (2018), 246.
[213]
Jayapal website. 2020. Jayapal Joins Colleagues In Introducing Bicameral Legislation to Ban Government Use of Facial Recognition, Other Biometric Technology - Congresswoman Pramila Jayapal. https://jayapal.house.gov/2020/06/25/jayapal-joins-rep-pressley-and-senators-markey-and-merkley-to-introduce-legislation-to-ban-government-use-of-facial-recognition-other-biometric-technology/. (Accessed on 07/30/2020).
[214]
Maranke Wieringa. 2020. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 1--18.
[215]
Virginius Xaxa. 2011. Tribes and social exclusion. CSSSC-UNICEF Social Inclusion Cell, An Occasional Paper 2 (2011), 1--18.
[216]
Alice Xiang and Inioluwa Deborah Raji. 2019. On the Legal Compatibility of Fairness Definitions. arXiv preprint arXiv:1912.00761 (2019).
[217]
Bendert Zevenbergen. 2020. Internet Users as Vulnerable and at-Risk Human Subjects: Reviewing Research Ethics Law for Technical Internet Research. Ph.D. Dissertation. University of Oxford.
[218]
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 335--340.
[219]
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457 (2017).
[220]
Ran Zmigrod, Sebastian J Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1651--1661.

    Published In

    FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
    March 2021
    899 pages
    ISBN:9781450383097
    DOI:10.1145/3442188
    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 01 March 2021


    Author Tags

    1. India
    2. ability
    3. algorithmic fairness
    4. anti-caste politics
    5. caste
    6. class
    7. critical algorithmic studies
    8. decoloniality
    9. feminism
    10. gender
    11. religion

