DOI: 10.1145/3498366.3505789

Short paper

Generating and Validating Contextually Relevant Justifications for Conversational Recommendation

Published: 14 March 2022

Abstract

Providing a justification or explanation for a recommendation has been shown to improve users’ experience with recommender systems, in particular by increasing confidence in the recommendations. However, to be effective in a conversational setting, justifications must be appropriate for the conversation so far. Previous approaches rely on a user history of reviews and ratings of related items to personalize the recommendation, but this information is generally unavailable when conversing with a new user, so a cold-start problem arises in generating suitable justifications. To address this problem, we propose and validate a new method, CONJURE (CONversational JUstifications for REcommendations), to generate contextually relevant justifications for conversational recommendations. Specifically, we investigate whether the conversation itself can be used effectively to model the user, identify relevant review content from other users, and generate a justification that boosts the user’s confidence in and understanding of the recommendation. To implement CONJURE, we test several novel extensions to prior algorithms, exploiting an auxiliary corpus of movie reviews to construct justifications from extracted pieces of those reviews. In particular, we explore different conversation representations and ranking approaches. To evaluate CONJURE, we developed a pairwise crowd task to compare justifications. Our results show large, significant improvements in Efficiency and Transparency metrics over previous non-contextualized template-based methods. We plan to release our code and an augmented conversation corpus on GitHub.
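The paper's full method is not included on this page, but the core retrieval step the abstract describes, ranking candidate review sentences from other users by relevance to the conversation so far, can be sketched as follows. This is a minimal bag-of-words illustration only; CONJURE's actual conversation representations and ranking approaches are not specified here, and the function and variable names are hypothetical.

```python
import math
import re
from collections import Counter

def bow(text):
    """Lowercase bag-of-words vector as a Counter of word counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two Counter word vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_review_sentences(conversation, review_sentences, k=2):
    """Rank candidate review sentences by similarity to the conversation.

    Stands in for the paper's conversation-representation + ranking step.
    """
    ctx = bow(" ".join(conversation))
    ranked = sorted(review_sentences,
                    key=lambda s: cosine(ctx, bow(s)),
                    reverse=True)
    return ranked[:k]

# Toy example: the conversation is used directly to model the user,
# so no prior review/rating history for this user is needed.
conversation = [
    "I'm in the mood for a tense sci-fi thriller",
    "Something with great visuals and a smart plot",
]
reviews = [
    "A tense, smart sci-fi thriller with stunning visuals.",
    "The romance felt flat and the pacing dragged.",
    "Great plot twists kept me guessing until the end.",
]
print(rank_review_sentences(conversation, reviews))
```

In the paper's setting, the top-ranked extracted review pieces would then be assembled into the justification; a real system would use learned sentence representations rather than raw word overlap.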


Published In

CHIIR '22: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval
March 2022, 399 pages
ISBN: 9781450391863
DOI: 10.1145/3498366

Publisher

Association for Computing Machinery, New York, NY, United States

            Author Tags

            1. conversational recommendations
            2. explainable recommendations

            Qualifiers

            • Short-paper
            • Research
            • Refereed limited

Conference

CHIIR '22

Acceptance Rates

Overall Acceptance Rate: 55 of 163 submissions (34%)
