
Local Item-Item Models for Top-N Recommendation

Published: 07 September 2016

Abstract

Item-based approaches based on SLIM (Sparse LInear Methods) have demonstrated very good performance for top-N recommendation; however, they estimate only a single model for all the users. This work is based on the intuition that not all users behave in the same way; instead, there exist subsets of like-minded users. By using different item-item models for these user subsets, we can capture differences in their preferences, which can lead to improved performance for top-N recommendation. In this work, we extend SLIM by combining global and local SLIM models. We present a method that computes the prediction scores as a user-specific combination of the predictions derived from a global and a local item-item model. In our approach, the global model, the local models, their user-specific combination, and the assignment of users to the local models are jointly optimized to improve top-N recommendation performance. Our experiments show that the proposed method improves upon the standard SLIM model and outperforms competing top-N recommendation approaches.
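
For intuition, the user-specific combination described above can be sketched as follows (an illustrative reading in our own notation, not necessarily the paper's exact formulation): the predicted score of user u for an unrated item i combines the global and the local item-item weights as

    \tilde{r}_{ui} = \sum_{l \in R_u} \left( g_u \, s_{li} + (1 - g_u) \, s_{li}^{p_u} \right),

where R_u is the set of items already rated by u, s_{li} is the weight of item l toward item i in the global item-item model, s_{li}^{p_u} is the corresponding weight in the local model of the user subset p_u to which u is assigned, and g_u \in [0, 1] is the user-specific mixing weight. Under this reading, the global weights, the local weights, the mixing weights g_u, and the assignments p_u are the quantities that are jointly optimized.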

Supplementary Material

MP4 File (p67.mp4)

Published In

RecSys '16: Proceedings of the 10th ACM Conference on Recommender Systems
September 2016
490 pages
ISBN:9781450340359
DOI:10.1145/2959100

Publisher

Association for Computing Machinery

New York, NY, United States

Badges

  • Best Paper

Author Tags

  1. collaborative filtering
  2. local models
  3. slim
  4. top-n recommendation

Qualifiers

  • Research-article

Conference

RecSys '16: Tenth ACM Conference on Recommender Systems
September 15 - 19, 2016
Boston, Massachusetts, USA

Acceptance Rates

RecSys '16 paper acceptance rate: 29 of 159 submissions (18%)
Overall acceptance rate: 254 of 1,295 submissions (20%)
