DOI: 10.1145/3239235.3240497

An empirical study of inadequate and adequate test suite reduction approaches

Published: 11 October 2018

Abstract

Background. Regression testing is conducted after changes are made to a system in order to ensure that these changes did not alter its expected behavior. The problem with regression testing is that it can require too much time and/or too many resources. This is why researchers have defined a number of regression testing approaches. Among these, Test Suite Reduction (TSR) approaches reduce the size of the original test suites, while preserving their capability to detect faults. TSR approaches can be classified as adequate or inadequate. Adequate approaches reduce test suites so that they completely preserve the test requirements (e.g., statement coverage) of the original test suite, while inadequate ones produce reduced test suites that partially preserve these test requirements.
Aims. We studied adequate and inadequate TSR approaches in terms of the tradeoff between reduction in test suite size and loss in fault detection capability. We also considered three different kinds of test requirements (i.e., statement, method, and class coverage).
Method. We conducted an experiment with six adequate (e.g., HGS) and 12 inadequate (e.g., the inadequate version of HGS) TSR approaches. In this experiment, we considered 19 experimental objects from a public dataset, i.e., SIR (Software-artifact Infrastructure Repository).
Results. The most important result from our experiment is that inadequate approaches achieve a better tradeoff between reduction in test suite size and loss in fault detection capability than adequate ones. This is especially true when these approaches use statement or method coverage as the test requirement.
Conclusions. Although our results are not definitive, they might help testers choose both the TSR approach and the kind of code coverage that best fit their needs when testing a software system.
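To make the adequate/inadequate distinction concrete, the following is a hypothetical greedy sketch of coverage-based test suite reduction (a generic illustration, not HGS or any of the specific algorithms evaluated in the paper). An adequate reduction keeps selecting tests until every test requirement covered by the original suite is preserved; an inadequate reduction stops once a chosen fraction of those requirements is covered.

```python
# Hypothetical sketch (not the paper's implementation): greedy test suite
# reduction over a coverage mapping. adequacy=1.0 yields an adequate
# reduction; adequacy<1.0 yields an inadequate one.

def greedy_reduce(coverage, adequacy=1.0):
    """coverage: dict mapping test name -> set of covered requirements
    (e.g. statement, method, or class IDs).
    adequacy: fraction of the original suite's requirements that the
    reduced suite must preserve."""
    all_reqs = set().union(*coverage.values())
    target = adequacy * len(all_reqs)
    covered, reduced = set(), []
    while len(covered) < target:
        # greedily pick the test adding the most still-uncovered requirements
        best = max(coverage, key=lambda t: len(coverage[t] - covered))
        gain = coverage[best] - covered
        if not gain:
            break  # no remaining test adds coverage
        reduced.append(best)
        covered |= gain
    return reduced

suite = {
    "t1": {"s1", "s2", "s3"},
    "t2": {"s3", "s4"},
    "t3": {"s5"},
    "t4": {"s1", "s4", "s5"},
}
print(greedy_reduce(suite, adequacy=1.0))   # adequate: all 5 requirements
print(greedy_reduce(suite, adequacy=0.6))   # inadequate: >= 60% of them
```

With this toy suite, the adequate run keeps two tests while the inadequate (60%) run keeps only one, at the risk of losing faults detectable only by the dropped tests: this is exactly the size/fault-detection tradeoff the study measures.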

Published In

ESEM '18: Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
October 2018
487 pages
ISBN:9781450358231
DOI:10.1145/3239235

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. controlled experiment
  2. regression testing
  3. test suite reduction

Qualifiers

  • Research-article

Conference

ESEM '18

Acceptance Rates

Overall Acceptance Rate 130 of 594 submissions, 22%
