Research article · DOI: 10.1145/3589334.3645403

Uplift Modeling for Target User Attacks on Recommender Systems

Published: 13 May 2024

Abstract

Recommender systems are vulnerable to injective attacks, which inject a limited number of fake users into a platform to manipulate the exposure of target items to all users. In this work, we identify that conventional injective attackers overlook the fact that each item has its own potential audience, and that the difficulty of attacking different users varies. Blindly attacking all users wastes the fake user budget and yields inferior attack performance. To address these issues, we focus on an under-explored attack task called target user attacks, which aims to promote target items to a particular user group. We formulate the varying attack difficulty as heterogeneous treatment effects through a causal lens and propose an Uplift-guided Budget Allocation (UBA) framework. UBA estimates the treatment effect on each target user and optimizes the allocation of fake user budgets to maximize attack performance. Theoretical and empirical analysis demonstrates the rationality of UBA's treatment effect estimation methods. By instantiating UBA on multiple attackers, we conduct extensive experiments on three datasets under various settings with different target items, target users, fake user budgets, victim models, and defense models, validating the effectiveness and robustness of UBA.
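The core idea of uplift-guided budget allocation can be sketched as follows. This is a minimal, hypothetical illustration (the function names, the toy uplift model, and the greedy strategy are assumptions, not the paper's actual UBA implementation): given an estimated uplift function that maps a target user and a per-user fake user count to an expected exposure probability, spend a global fake user budget greedily on the largest marginal gains.

```python
# Hypothetical sketch of uplift-guided budget allocation (not the paper's
# actual UBA algorithm). `uplift(u, b)` is an assumed estimator of the
# probability that the target item reaches user u when b fake users are
# spent on u; it is assumed monotone in b with diminishing returns.
import heapq

def allocate_budget(uplift, users, total_budget, max_per_user):
    """Greedily assign fake-user budget to maximize total estimated uplift."""
    alloc = {u: 0 for u in users}
    # Max-heap (via negated gains) of the marginal gain from one more
    # fake user spent on each target user.
    heap = [(-(uplift(u, 1) - uplift(u, 0)), u) for u in users]
    heapq.heapify(heap)
    spent = 0
    while heap and spent < total_budget:
        neg_gain, u = heapq.heappop(heap)
        if -neg_gain <= 0:
            break  # no remaining user benefits from more budget
        alloc[u] += 1
        spent += 1
        b = alloc[u]
        if b < max_per_user:
            # Re-insert u with its next marginal gain.
            heapq.heappush(heap, (-(uplift(u, b + 1) - uplift(u, b)), u))
    return alloc

# Toy uplift model: user "a" is easy to attack, "b" is hard, "c" is medium.
difficulty = {"a": 0.5, "b": 2.0, "c": 1.0}
def toy_uplift(u, b):
    return 1.0 - (1.0 - 0.3 / difficulty[u]) ** b

alloc = allocate_budget(toy_uplift, ["a", "b", "c"], total_budget=5, max_per_user=4)
print(alloc)
```

Under this toy model, the greedy allocation concentrates budget on easy-to-attack users until their diminishing marginal uplift falls below that of harder users, which mirrors the abstract's point that blindly attacking all users wastes budget.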

Supplemental Material

• Supplemental video (MOV file)
• Presentation video (MP4 file)


Published In

WWW '24: Proceedings of the ACM Web Conference 2024
May 2024, 4826 pages
ISBN: 9798400701719
DOI: 10.1145/3589334

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. recommender attack
    2. target user attack
    3. uplift modeling

    Funding Sources

    • the CCCD Key Lab of Ministry of Culture and Tourism
    • the National Key Research and Development Program of China
    • the National Natural Science Foundation of China

Conference

WWW '24: The ACM Web Conference 2024
May 13-17, 2024, Singapore, Singapore

    Acceptance Rates

    Overall Acceptance Rate 1,899 of 8,196 submissions, 23%
