DOI: 10.1145/3576882.3617910
Research Article

Validating a Language-Independent CS1 Learning Outcomes Assessment

Published: 05 December 2023

Abstract

Assessing learning outcomes in computer science education is essential: it indicates student progress, the effectiveness of teaching methods, and areas for improvement. Aptitude tests have been widely used to measure these outcomes, but they suffer from issues with reliability, difficulty, and applicability across courses and institutions. To address these issues, this study contributes to the development of a reliable, language-independent testing instrument that accurately evaluates students' performance, capabilities, and grasp of the learning outcomes of an introductory computer science course. We employed the Second Computer Science 1 Exam Revised version 2 (SCS1Rv2) as a post-assessment tool to measure learning outcomes. The SCS1Rv2 was administered in three CS1 course sections, and the results were compared with students' final grades. The SCS1Rv2 was validated using Item Response Theory, which assessed the test's difficulty and reliability. We found that the SCS1Rv2 is a reasonable predictor of course learning outcomes. The intent of this study is to aid in the creation of a standardized, reliable, and effective testing instrument that can be used across different courses and institutions; the SCS1Rv2 has the potential to be a valuable tool in that development.
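The abstract's Item Response Theory validation rests on modeling each test item's difficulty. As an illustrative sketch only (the paper does not publish its model or parameters; the function name and example values below are hypothetical), the widely used two-parameter logistic (2PL) IRT model gives the probability that a student of a given ability answers an item correctly:

```python
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic (2PL) item characteristic curve.

    theta: student ability on the latent scale
    a:     item discrimination (slope)
    b:     item difficulty (ability at which P(correct) = 0.5)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# For an average student (theta = 0), a harder item (b = 1.0)
# yields a lower success probability than an easier one (b = -1.0).
p_hard = icc_2pl(0.0, 1.2, 1.0)
p_easy = icc_2pl(0.0, 1.2, -1.0)
```

In an IRT-based validation like the one described, fitting `a` and `b` for every item from student responses is what lets the test's difficulty and reliability be assessed independently of any particular course section.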


    Published In

    cover image ACM Conferences
    CompEd 2023: Proceedings of the ACM Conference on Global Computing Education Vol 1
    December 2023
    180 pages
    ISBN:9798400700484
    DOI:10.1145/3576882

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. CS1
    2. assessment
    3. item response theory
    4. validation

    Conference

    CompEd 2023

    Acceptance Rates

    Overall Acceptance Rate 33 of 100 submissions, 33%
