Research Article
DOI: 10.1145/3545945.3569824

G is for Generalisation: Predicting Student Success from Keystrokes

Published: 03 March 2023

Abstract

Student performance prediction aims to build models that help educators identify struggling students so they can be better supported. However, prior work in this space frequently evaluates features and models on data collected from a single semester of a single course taught at a single university. Without evaluating these methods in a broader context, it remains an open question whether performance prediction methods can generalise to new data. We test three methods for evaluating student performance models on data from introductory programming courses at two universities, totalling 3,323 students. Our results suggest that cross-validation on one semester is insufficient for gauging model performance in the real world. Instead, we suggest that, where possible, future work in student performance prediction collect data from multiple semesters and use one or more as a distinct hold-out set. Failing this, bootstrapped cross-validation should be used to improve confidence in models' performance. By recommending stronger methods for evaluating performance prediction models, we hope to bring them closer to practical use and help teachers support struggling students in novice programming courses.
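
The three evaluation protocols the abstract contrasts can be made concrete with a short sketch. The Python snippet below is not the authors' code: the features, labels, and semester identifiers are synthetic stand-ins, scikit-learn is used only for convenience, and the bootstrapped-cross-validation step is one plausible reading of that protocol, so the paper's exact procedure may differ.

    """Sketch of three ways to evaluate a student-performance model:
    (1) cross-validation on one semester, (2) a distinct semester hold-out,
    (3) bootstrapped cross-validation. Data here is synthetic, so the
    printed scores are meaningless; only the protocols are illustrative."""
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import cross_val_score
    from sklearn.utils import resample

    rng = np.random.default_rng(0)
    n = 600
    X = rng.normal(size=(n, 8))              # hypothetical keystroke-derived features
    y = rng.integers(0, 2, size=n)           # hypothetical pass/fail labels
    semester = np.repeat([0, 1, 2], n // 3)  # which semester each student came from

    model = RandomForestClassifier(random_state=0)

    # 1. Plain k-fold cross-validation on a single semester: the paper argues
    #    this alone overestimates how well the model transfers to new data.
    in_sem = semester == 0
    cv_scores = cross_val_score(model, X[in_sem], y[in_sem], cv=5)
    print(f"single-semester CV: {cv_scores.mean():.3f}")

    # 2. Distinct hold-out: train on earlier semesters, test on an unseen one.
    train, test = semester < 2, semester == 2
    model.fit(X[train], y[train])
    print(f"held-out semester:  {accuracy_score(y[test], model.predict(X[test])):.3f}")

    # 3. Bootstrapped cross-validation: repeat CV on resampled versions of the
    #    single-semester data to get a distribution of scores rather than a
    #    single point estimate. Note that resampling with replacement before
    #    CV can place duplicates in both train and test folds, which biases
    #    scores upward; the paper's exact protocol may handle this differently.
    boot_means = []
    for seed in range(20):
        Xb, yb = resample(X[in_sem], y[in_sem], random_state=seed)
        boot_means.append(cross_val_score(model, Xb, yb, cv=5).mean())
    print(f"bootstrapped CV:    {np.mean(boot_means):.3f} +/- {np.std(boot_means):.3f}")

The point of the third protocol is that repeating the evaluation over resampled datasets yields a spread of scores, so a model's performance can be reported with an uncertainty estimate rather than a single, possibly optimistic, number.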

Supplementary Material

MP4 File (SIGCSE23-V1fp331.mp4)

      Information & Contributors

      Information

      Published In

      cover image ACM Conferences
      SIGCSE 2023: Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1
      March 2023
      1481 pages
      ISBN:9781450394314
      DOI:10.1145/3545945

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. computing education
2. educational data mining
3. learning analytics
4. predicting performance
5. programming process data

Conference

SIGCSE 2023

Acceptance Rates

Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%
