DOI: 10.1145/3341525.3387430
Research Article | Open Access

Automated Assessment of Android Exercises with Cloud-native Technologies

Published: 15 June 2020

Abstract

Mobile applications are challenging to test, as they usually have a complex graphical user interface and advanced functionality that involves interacting with remote services. Due to these characteristics, student assessment in courses on mobile application development usually relies on assignments or projects that are manually checked by teaching assistants for grading. This approach clearly does not scale to large classrooms, especially for online courses. This article presents a novel system for the automated assessment of Android exercises with cloud-native technologies. Unlike the state of the art, the proposed solution leverages a mobile app testing framework that is widely used in industry instead of custom libraries. Furthermore, the devised system employs software containers and scales with the availability of resources in a data center, which is essential for massive open online courses. The system design and implementation are detailed, together with the results from a deployment in a master-level course with 120 students. The feedback received demonstrates that the proposed solution was effective, as it provided insightful feedback and supported independent learning of mobile application development.
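The abstract states that the grader builds on a mobile app testing framework "widely used in industry" rather than on custom libraries. As a minimal sketch of what an automatically evaluated exercise test might look like, assuming that framework is Espresso (the abstract does not name it), with a hypothetical MainActivity and hypothetical R.id.name_input, R.id.submit_button, and R.id.greeting_label view identifiers:

```kotlin
// Hypothetical instrumented UI test for a student exercise, assuming the
// Espresso framework; the activity and view identifiers are illustrative.
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class GreetingExerciseTest {

    // Launches the activity under test before each test method runs.
    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun typingNameAndSubmittingShowsGreeting() {
        // Enter a name into the input field, then dismiss the soft keyboard.
        onView(withId(R.id.name_input)).perform(typeText("Ada"), closeSoftKeyboard())
        // Tap the submit button.
        onView(withId(R.id.submit_button)).perform(click())
        // Verify that the greeting label now shows the expected text.
        onView(withId(R.id.greeting_label)).check(matches(withText("Hello, Ada!")))
    }
}
```

In a system like the one described, tests of this kind would plausibly run against each submission inside a software container hosting an Android emulator, with pass/fail outcomes mapped to grades and textual feedback; a container-per-submission model is what would let the grader scale with available data-center resources.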



Published In

ITiCSE '20: Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education
June 2020, 615 pages
ISBN: 9781450368742
DOI: 10.1145/3341525
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. UI testing
  2. android
  3. automated grading
  4. computer science education
  5. full-stack
  6. mobile app development
  7. online learning
  8. software containers

Qualifiers

  • Research-article

Conference

ITiCSE '20

Acceptance Rates

Overall acceptance rate: 552 of 1,613 submissions (34%)

Article Metrics

  • Downloads (last 12 months): 127
  • Downloads (last 6 weeks): 11

Reflects downloads up to 22 Dec 2024.
