RecoNet: An Interpretable Neural Architecture for Recommender Systems
Francesco Fusco, Michalis Vlachos, Vasileios Vasileiadis, Kathrin Wardatzky, Johannes Schneider
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2343-2349.
https://doi.org/10.24963/ijcai.2019/325
Neural systems offer high predictive accuracy but are plagued by long training times and low interpretability. We present a simple neural architecture for recommender systems that alleviates several of these shortcomings. First, its predictive power is comparable to state-of-the-art recommender approaches. Second, owing to its simplicity, the trained model can be interpreted easily because it exposes the individual contribution of each input feature to the decision. Our method is three orders of magnitude faster than general-purpose explanatory approaches such as LIME. Finally, by design, our architecture addresses cold-start issues: the model does not require retraining in the presence of new users.
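The abstract does not spell out the architecture, but the interpretability claim — each input feature contributes an individual, readable term to the decision — can be illustrated with a minimal sketch. The following is an assumption-laden toy, not the paper's RecoNet: it scores a candidate item as an additive sum of per-history-item terms, so the "explanation" is exact and free, with no post-hoc explainer like LIME. The embeddings, function name, and item ids are all hypothetical.

```python
import numpy as np

# Hypothetical item embeddings (NOT from the paper; illustration only).
rng = np.random.default_rng(0)
n_items, dim = 5, 4
item_emb = rng.normal(size=(n_items, dim))

def score_with_contributions(history, target):
    """Score a candidate `target` item given a user's `history` of item ids.

    The score is a plain sum of one dot-product term per history item,
    so each term IS that feature's contribution: the contributions add
    up to the score exactly, by construction.
    """
    contribs = item_emb[history] @ item_emb[target]  # one term per input feature
    return contribs.sum(), contribs

score, contribs = score_with_contributions([0, 2, 3], target=1)
# The additive structure makes the explanation exact:
assert np.isclose(score, contribs.sum())
```

Because the explanation falls out of the forward pass itself, its cost is one array read, which is the kind of design that can plausibly be orders of magnitude cheaper than sampling-based explainers. A purely additive scorer also sidesteps user-specific parameters, which is one common way to avoid retraining for new users.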
Keywords:
Machine Learning: Data Mining
Machine Learning: Interpretability
Machine Learning: Recommender Systems
Machine Learning: Explainable Machine Learning