DOI: 10.1145/3447548.3467249
Research article · Open access

Deconfounded Recommendation for Alleviating Bias Amplification

Published: 14 August 2021

Abstract

Recommender systems usually amplify the biases in the data: a model learned from historical interactions with an imbalanced item distribution will amplify the imbalance by over-recommending items from the majority groups. Addressing this issue is essential for a healthy recommendation ecosystem in the long run. Existing work applies bias control to the ranking targets (e.g., calibration, fairness, and diversity), but ignores the true cause of bias amplification and sacrifices recommendation accuracy.
In this work, we scrutinize the cause-effect factors behind bias amplification and identify the main reason: the confounding effect of the imbalanced item distribution on the user representation and the prediction score. The existence of such a confounder pushes us to go beyond merely modeling the conditional probability and to embrace causal modeling for recommendation. Towards this end, we propose a Deconfounded Recommender System (DecRS), which models the causal effect of the user representation on the prediction score. The key to eliminating the impact of the confounder is backdoor adjustment, which is, however, difficult to perform due to the infinite sample space of the confounder. To address this challenge, we contribute an approximation operator for backdoor adjustment that can be easily plugged into most recommender models. Lastly, we devise an inference strategy that dynamically regulates backdoor adjustment according to the user status. We instantiate DecRS on two representative models, FM [32] and NFM [16], and conduct extensive experiments over two benchmarks to validate the superiority of the proposed DecRS.
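For readers new to the causal machinery mentioned above, the sketch below spells out what backdoor adjustment and its approximation look like. The notation is ours, not the paper's: u denotes the user representation, D the confounder (the group-level item distribution) with values d, i the target item, and f the recommender's scoring function. Backdoor adjustment removes the confounding path by intervening on the user representation:

    P(Y \mid do(U = u), I = i) = \sum_{d} P(D = d) \, P(Y \mid U = u, D = d, I = i).

Because D ranges over an effectively infinite sample space, this sum (or integral) cannot be evaluated directly. A plausible reading of the approximation operator, stated here as an assumption rather than the paper's exact formulation, is to push the expectation inside the model,

    \mathbb{E}_{d \sim P(D)}\big[ f(u, d, i) \big] \approx f\big(u, \mathbb{E}[d], i\big),

so that a single forward pass over the expected confounder stands in for the full adjustment; the error of such a swap is the Jensen gap analyzed in [1, 13].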

References

[1] Shoshana Abramovich and Lars-Erik Persson. 2016. Some new estimates of the 'Jensen gap'. Journal of Inequalities and Applications, Vol. 2016, 1 (2016), 1--9.
[2] Qingyao Ai, Keping Bi, Cheng Luo, Jiafeng Guo, and W. Bruce Croft. 2018. Unbiased Learning to Rank with Unbiased Propensity Estimation. In SIGIR. ACM, 385--394.
[3] Asia J. Biega, Krishna P. Gummadi, and Gerhard Weikum. 2018. Equity of attention: Amortizing individual fairness in rankings. In SIGIR. ACM, 405--414.
[4] Stephen Bonner and Flavian Vasile. 2018. Causal embeddings for recommendation. In RecSys. ACM, 104--112.
[5] Robin Burke. 2017. Multisided fairness for recommendation. In FAT/ML.
[6] Praveen Chandar and Ben Carterette. 2013. Preference based evaluation measures for novelty and diversity. In SIGIR. ACM, 413--422.
[7] Allison J. B. Chaney, Brandon M. Stewart, and Barbara E. Engelhardt. 2018. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In RecSys. ACM, 224--232.
[8] Peizhe Cheng, Shuaiqiang Wang, Jun Ma, Jiankai Sun, and Hui Xiong. 2017. Learning to Recommend Accurate and Diverse Items. In WWW. IW3C2, 183--192.
[9] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, Vol. 12, 7 (2011).
[10] Fuli Feng, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. 2021a. Cross-GCN: Enhancing Graph Convolutional Network with k-Order Feature Interactions. TKDE (2021).
[11] Fuli Feng, Weiran Huang, Xin Xin, Xiangnan He, and Tat-Seng Chua. 2021b. Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method. In SIGIR. ACM.
[12] Fuli Feng, Jizhi Zhang, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. 2021c. Empowering Language Understanding with Counterfactual Reasoning. In ACL-IJCNLP Findings. ACL.
[13] Xiang Gao, Meera Sitharam, and Adrian E. Roitberg. 2019. Bounds on the Jensen Gap, and Implications for Mean-Concentrated Distributions. AJMAA, Vol. 16, Issue 2, Article 14 (2019), 1--16.
[14] Yingqiang Ge, Shuya Zhao, Honglu Zhou, Changhua Pei, Fei Sun, Wenwu Ou, and Yongfeng Zhang. 2020. Understanding Echo Chambers in E-Commerce Recommender Systems. In SIGIR. ACM, 2261--2270.
[15] Nina Grgic-Hlaca, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller. 2016. The case for process fairness in learning: Feature selection for fair decision making. In NeurIPS.
[16] Xiangnan He and Tat-Seng Chua. 2017. Neural factorization machines for sparse predictive analytics. In SIGIR. ACM, 355--364.
[17] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural Collaborative Filtering. In WWW. ACM, 173--182.
[18] Hao Jiang, Wenjie Wang, Yinwei Wei, Zan Gao, Yinglong Wang, and Liqiang Nie. 2020. What Aspect Do You Like: Multi-Scale Time-Aware User Interest Modeling for Micro-Video Recommendation. In MM. ACM, 3487--3495.
[19] Thorsten Joachims, Adith Swaminathan, and Tobias Schnabel. 2017. Unbiased Learning-to-Rank with Biased Feedback. In WSDM. ACM, 781--789.
[20] Matt J. Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual Fairness. In NeurIPS. Curran Associates, Inc., 4066--4076.
[21] Rishabh Mehrotra, James McInerney, Hugues Bouchard, Mounia Lalmas, and Fernando Diaz. 2018. Towards a fair marketplace: Counterfactual evaluation of the trade-off between relevance, fairness and satisfaction in recommendation systems. In CIKM. ACM, 2243--2251.
[22] Marco Morik, Ashudeep Singh, Jessica Hong, and Thorsten Joachims. 2020. Controlling Fairness and Bias in Dynamic Learning-to-Rank. In SIGIR. ACM, 429--438.
[23] Tien T. Nguyen, Pik-Mai Hui, F. Maxwell Harper, Loren Terveen, and Joseph A. Konstan. 2014. Exploring the filter bubble: the effect of using recommender systems on content diversity. In WWW. ACM, 677--686.
[24] Liqiang Nie, Yongqi Li, Fuli Feng, Xuemeng Song, Meng Wang, and Yinglong Wang. 2020. Large-Scale Question Tagging via Joint Question-Topic Embedding Learning. TOIS, Vol. 38 (2020).
[25] Liqiang Nie, Meng Liu, and Xuemeng Song. 2019. Multimodal learning toward micro-video understanding. Synthesis Lectures on Image, Video, and Multimedia Processing, Vol. 9, 4 (2019), 1--186.
[26] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. 2021. Counterfactual VQA: A Cause-Effect Look at Language Bias. In CVPR. IEEE.
[27] Gourab K. Patro, Arpita Biswas, Niloy Ganguly, Krishna P. Gummadi, and Abhijnan Chakraborty. 2020. FairRec: Two-sided fairness for personalized recommendations in two-sided platforms. In WWW. ACM, 1194--1204.
[28] Judea Pearl. 2009. Causality. Cambridge University Press.
[29] Judea Pearl and Dana Mackenzie. 2018. The Book of Why: The New Science of Cause and Effect (1st ed.). Basic Books, Inc.
[30] Evaggelia Pitoura, Georgia Koutrika, and Kostas Stefanidis. 2020. Fairness in Rankings and Recommenders. In EDBT. ACM, 651--654.
[31] Zhen Qin, Suming J. Chen, Donald Metzler, Yongwoo Noh, Jingzheng Qin, and Xuanhui Wang. 2020. Attribute-Based Propensity for Unbiased Learning in Recommender Systems: Algorithm and Case Studies. In KDD. ACM, 2359--2367.
[32] Steffen Rendle. 2010. Factorization machines. In ICDM. IEEE, 995--1000.
[33] Yuta Saito, Suguru Yaginuma, Yuta Nishino, Hayato Sakata, and Kazuhide Nakata. 2020. Unbiased Recommender Learning from Missing-Not-At-Random Implicit Feedback. In WSDM. ACM, 501--509.
[34] Ashudeep Singh and Thorsten Joachims. 2018. Fairness of exposure in rankings. In KDD. ACM, 2219--2228.
[35] Harald Steck. 2018. Calibrated recommendations. In RecSys. ACM, 154--162.
[36] Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, and Mark Coates. 2020. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD. ACM, 2030--2039.
[37] Kaihua Tang, Jianqiang Huang, and Hanwang Zhang. 2020. Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect. In NeurIPS.
[38] Tan Wang, Jianqiang Huang, Hanwang Zhang, and Qianru Sun. 2020a. Visual Commonsense R-CNN. In CVPR. IEEE, 10760--10770.
[39] Wenjie Wang, Fuli Feng, Xiangnan He, Liqiang Nie, and Tat-Seng Chua. 2021a. Denoising implicit feedback for recommendation. In WSDM. ACM, 373--381.
[40] Wenjie Wang, Fuli Feng, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. 2021b. Clicks can be Cheating: Counterfactual Recommendation for Mitigating Clickbait Issue. In SIGIR. ACM.
[41] Wenjie Wang, Minlie Huang, Xin-Shun Xu, Fumin Shen, and Liqiang Nie. 2018a. Chat more: Deepening and widening the chatting topic via a deep model. In SIGIR. ACM, 255--264.
[42] Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural Graph Collaborative Filtering. In SIGIR. ACM, 165--174.
[43] Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, and Tat-Seng Chua. 2020b. Disentangled Graph Collaborative Filtering. In SIGIR. ACM, 1001--1010.
[44] Yixin Wang, Dawen Liang, Laurent Charlin, and David M. Blei. 2018b. The deconfounded recommender: A causal inference approach to recommendation. arXiv:1808.06581.
[45] Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He, Richang Hong, and Tat-Seng Chua. 2019. MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video. In MM. ACM, 1437--1445.
[46] Ke Yang and Julia Stoyanovich. 2017. Measuring fairness in ranked outputs. In SSDBM. ACM, 1--6.
[47] Xun Yang, Fuli Feng, Wei Ji, Meng Wang, and Tat-Seng Chua. 2021. Deconfounded Video Moment Retrieval with Causal Intervention. In SIGIR. ACM.
[48] Yang Zhang, Fuli Feng, Xiangnan He, Tianxin Wei, Chonggang Song, Guohui Ling, and Yongdong Zhang. 2021. Causal Intervention for Leveraging Popularity Bias in Recommendation. In SIGIR. ACM.
[49] Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, and Georg Lausen. 2005. Improving recommendation lists through topic diversification. In WWW. ACM, 22--32.
[50] Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, and Yue He. 2020. Counterfactual Prediction for Bundle Treatment. In NeurIPS.


Published In

KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
August 2021
4259 pages
ISBN:9781450383325
DOI:10.1145/3447548

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. bias amplification
  2. deconfounded recommendation
  3. user interest imbalance

Qualifiers

  • Research-article

Conference

KDD '21

Acceptance Rates

Overall Acceptance Rate 1,133 of 8,635 submissions, 13%
