DOI: 10.1007/978-3-030-77091-4_5
Article

Explainable Attentional Neural Recommendations for Personalized Social Learning

Published: 24 November 2020

Abstract

Learning and training processes are increasingly affected by the diffusion of Artificial Intelligence (AI) techniques and methods. AI can be exploited in many ways to support education, although deep learning (DL) models in particular typically suffer from some degree of opacity and lack of interpretability. Explainable AI (XAI) aims to create a new set of AI techniques whose outputs and decisions come with greater transparency and interpretability. In the educational field it is particularly significant, and challenging, to understand the reasons behind a model's outcomes, especially when it comes to suggestions for creating, managing or evaluating courses or didactic resources. Deep attentional mechanisms have proved particularly effective at identifying relevant communities and relationships in a given input network, and this information can be exploited to interpret the suggested decision process. In this paper we present the first stages of our ongoing research project, aimed at substantially empowering the recommender system of the educational platform “WhoTeach” by means of explainability, to help teachers and experts create and manage high-quality courses for personalized learning.
The presented model is our first attempt to introduce explainability into the system. As shown, the model has strong potential to provide relevant recommendations. Moreover, it leaves room for implementing effective techniques to achieve full explainability.
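The attention weights mentioned in the abstract are what make this line of work explainable: in a graph attention network (GAT, Velickovic et al.), each node assigns a normalized weight to each neighbor, and those weights can be read off as "this recommendation was driven by these peers/resources". The following is a minimal single-head sketch over a toy graph, not the paper's actual model; the graph, feature dimensions, and parameter names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy social graph with self-loops: 4 nodes
# (e.g. users and didactic resources); 1 = edge.
A = np.array([[1, 1, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

F_in, F_out = 5, 3
X = rng.normal(size=(4, F_in))          # node feature vectors
W = rng.normal(size=(F_in, F_out))      # shared linear transform
a = rng.normal(size=(2 * F_out,))       # attention parameter vector

H = X @ W                               # h_i = W x_i

def leaky_relu(z, slope=0.2):
    return np.where(z > 0.0, z, slope * z)

# GAT scores e_ij = LeakyReLU(a^T [h_i || h_j]); the concatenation
# splits into two dot products, so an outer sum computes all pairs.
e = leaky_relu(np.add.outer(H @ a[:F_out], H @ a[F_out:]))

# Mask non-edges, then softmax each row over the node's neighborhood.
e = np.where(A > 0, e, -np.inf)
alpha = np.exp(e - e.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

H_next = alpha @ H                      # aggregated node representations

# alpha[i, j] is the weight node i assigns to neighbor j: the raw
# material for a social explanation of a recommendation.
print(np.round(alpha, 3))
```

Each row of `alpha` sums to 1 and is zero outside the node's neighborhood, so the largest entries in a row directly name the neighbors most responsible for that node's updated representation.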



        Published In

        AIxIA 2020 – Advances in Artificial Intelligence: XIXth International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020, Revised Selected Papers
        Nov 2020
        466 pages
        ISBN:978-3-030-77090-7
        DOI:10.1007/978-3-030-77091-4
Editors: Matteo Baldoni, Stefania Bandini

        Publisher

        Springer-Verlag

        Berlin, Heidelberg


        Author Tags

        1. Explainable AI
        2. Personalized learning
        3. WhoTeach
        4. Social recommendations
        5. Graph attention networks
