DOI: 10.1145/2361276.2361288
research-article

A fast measure for identifying at-risk students in computer science

Published: 09 September 2012

Abstract

How do we identify students who are at risk of failing our courses? Waiting to accumulate sufficient assessed work incurs a substantial lag in identifying students who need assistance. We want to provide students with support and guidance as soon as possible to reduce the risk of failure or disengagement. In small classes we can monitor students directly and mark graded assessments quickly enough to provide timely feedback, but large classes, where it is easiest for students to disappear and ultimately drop out, pose a much greater challenge. We need reliable and scalable mechanisms for identifying at-risk students as quickly as possible, before they disengage, drop out or fail. The volumes of student information retained in data warehouse and business intelligence systems are often not available to lecturing staff, who can only observe the course-level marks from previous study and participation behaviour in the current course, based on attendance and assignment submission.
We have identified a measure of "at-risk" behaviour that depends upon the timeliness of a student's initial submissions of any marked activity. By analysing four years of electronic submissions across our school's student body, we have extracted over 220,000 individual records, spanning more than 1,900 students, to establish that early electronic submission behaviour can provide a reliable indicator of future behaviour. By measuring the impact on a student's Grade Point Average (GPA), we show that knowledge of assignment submission and current course level provides a reliable guide to student performance.
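The abstract does not give the exact form of the timeliness measure, so the sketch below is only a hypothetical illustration of the general idea: treat the timing of a student's first submissions as a behavioural signal and flag students who habitually submit at (or after) the last minute. All names, the 24-hour margin and the 0.5 threshold are assumptions for illustration, not the paper's parameters.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import defaultdict


@dataclass
class Submission:
    """One student's first electronic submission for one assessed activity."""
    student_id: str
    submitted_at: datetime  # timestamp of the initial submission
    deadline: datetime      # deadline of the assessed activity


def flag_at_risk(submissions, margin=timedelta(hours=24), threshold=0.5):
    """Return students whose initial submissions are habitually last-minute.

    An initial submission counts as 'late-ish' when it lands within
    `margin` of the deadline or after it; a student is flagged when the
    late-ish fraction of their submissions exceeds `threshold`.
    (Hypothetical measure; the paper's own definition is not in the abstract.)
    """
    tallies = defaultdict(lambda: [0, 0])  # student_id -> [late-ish, total]
    for s in submissions:
        tallies[s.student_id][0] += s.submitted_at >= s.deadline - margin
        tallies[s.student_id][1] += 1
    return {sid: late / total
            for sid, (late, total) in tallies.items()
            if late / total > threshold}


# Hypothetical usage with two students and one deadline:
due = datetime(2012, 3, 30, 23, 59)
records = [
    Submission("s1", datetime(2012, 3, 25, 10, 0), due),   # five days early
    Submission("s2", datetime(2012, 3, 30, 23, 30), due),  # 29 minutes early
]
print(flag_at_risk(records))  # {'s2': 1.0}
```

Under this kind of scheme the flag can be recomputed after each early assessed activity, so at-risk students surface within the first weeks of a course rather than only after marks have accumulated.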

Published In

ICER '12: Proceedings of the ninth annual international conference on International computing education research
September 2012, 174 pages
ISBN: 9781450316040
DOI: 10.1145/2361276

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tag

  1. engagement

Qualifiers

  • Research-article

Conference

ICER '12: International Computing Education Research Conference
September 9-11, 2012
Auckland, New Zealand

Acceptance Rates

Overall Acceptance Rate: 189 of 803 submissions, 24%

