DOI: 10.1145/3324884.3415288

PerfCI: a toolchain for automated performance testing during continuous integration of Python projects

Published: 27 January 2021

Abstract

Software performance testing is an essential quality assurance mechanism that can identify optimization opportunities. Automating this process requires strong tool support, especially under Continuous Integration (CI), where tests must run completely automatically and it is desirable to provide developers with actionable feedback. A lack of existing tools means that performance testing is normally left out of the scope of CI. In this paper, we propose a toolchain, PerfCI, to pave the way for developers to easily set up and carry out automated performance testing under CI. Our toolchain allows users to (1) specify performance testing tasks, (2) analyze unit tests on a variety of Python projects, ranging from scripts to full-blown Flask-based web services, by extending a performance analysis framework (VyPR), and (3) evaluate the resulting performance data to give feedback on the code. We demonstrate the feasibility of our toolchain by using it on a web service running at the Compact Muon Solenoid (CMS) experiment at the world's largest particle physics laboratory, CERN.
Package: Source code, examples, and documentation of PerfCI are available at https://gitlab.cern.ch/omjaved/perfci. A tool demonstration can be viewed on YouTube: https://youtu.be/RDmXMKA1v7g. We also provide the dataset used in the analysis: https://gitlab.cern.ch/omjaved/perfci-dataset.
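To make the workflow in the abstract concrete, below is a minimal, hypothetical Python sketch of the kind of per-test timing check a CI performance stage could run and evaluate. It is not PerfCI's actual API (that is documented in the repository linked above); the helper names run_timed and evaluate, the repetition count, and the time budgets are illustrative assumptions, and only the Python standard library is used.

import statistics
import time
from typing import Callable, Dict, List

# Hypothetical sketch only: it illustrates a per-test timing check of the kind a
# CI performance stage could run; it is not PerfCI's real interface.

def run_timed(test: Callable[[], None], repetitions: int = 5) -> List[float]:
    """Run a zero-argument test several times and return wall-clock durations in seconds."""
    durations = []
    for _ in range(repetitions):
        start = time.perf_counter()
        test()
        durations.append(time.perf_counter() - start)
    return durations

def evaluate(results: Dict[str, List[float]], budgets: Dict[str, float]) -> Dict[str, bool]:
    """Return, per test, whether its median duration stays within the assumed budget (seconds)."""
    return {
        name: statistics.median(times) <= budgets.get(name, float("inf"))
        for name, times in results.items()
    }

if __name__ == "__main__":
    # Toy stand-in for a real unit test from the project under CI.
    def test_sort() -> None:
        sorted(range(100_000, 0, -1))

    verdicts = evaluate({"test_sort": run_timed(test_sort)},
                        budgets={"test_sort": 0.5})
    # A CI job could fail the pipeline when any verdict is False.
    print(verdicts)

Under these assumptions, a GitLab CI job could execute such a script as a dedicated pipeline stage and fail the build whenever a budget is exceeded, which is the style of automated, actionable feedback the abstract describes.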



Published In

ASE '20: Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering
December 2020
1449 pages
ISBN: 9781450367684
DOI: 10.1145/3324884
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 January 2021


Qualifiers

  • Short paper

Conference

ASE '20

Acceptance Rates

Overall Acceptance Rate: 82 of 337 submissions, 24%
