
Test case selection and prioritization: risk-based or design-based?

Published: 16 September 2010

Abstract

The objective of this qualitative study was to observe and empirically analyze how software organizations decide which test cases to select for their software projects. Because software test processes are constrained by resources such as time and money, a selection process usually determines which features get tested. In this study we surveyed 31 software-producing organizations and interviewed 36 software professionals from 12 focus organizations to gain better insight into testing practices. Our findings indicate that approaches to test case selection are usually oriented toward one of two objectives. The first is risk-based selection, where the aim is to focus testing on those parts that would be too expensive to fix after launch. The second is design-based selection, where the focus is on ensuring that the software can complete the core operations it was designed to perform. These results can be used to develop testing organizations and to identify better practices for test case selection.
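The two orientations the abstract contrasts can be sketched as prioritization policies over a test suite. The sketch below is illustrative only and is not taken from the paper: the `TestCase` fields, the risk formula (probability times failure cost), and the core-feature flag are all assumed, simplified stand-ins for the judgments the interviewed organizations make informally.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_probability: float  # estimated likelihood the tested part fails (0..1)
    failure_cost: float         # estimated cost of fixing the failure after launch
    covers_core_feature: bool   # does this test exercise a core designed operation?

def risk_based_order(tests):
    """Risk-based selection: order tests by descending risk exposure,
    i.e. the parts that are too expensive to fix after launch come first."""
    return sorted(tests, key=lambda t: t.failure_probability * t.failure_cost,
                  reverse=True)

def design_based_order(tests):
    """Design-based selection: tests that verify the software's core
    designed operations come first (sort is stable, so ties keep order)."""
    return sorted(tests, key=lambda t: t.covers_core_feature, reverse=True)

suite = [
    TestCase("export-report", 0.20,  50_000, False),
    TestCase("login",         0.05,  10_000, True),
    TestCase("payment",       0.10, 200_000, True),
]
print([t.name for t in risk_based_order(suite)])
print([t.name for t in design_based_order(suite)])
```

Under the risk-based policy the `payment` test leads because its risk exposure (0.10 × 200,000) dominates, while the design-based policy simply floats the core-feature tests to the front regardless of estimated cost.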




Published In

ESEM '10: Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement
September 2010
423 pages
ISBN:9781450300391
DOI:10.1145/1852786

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. empirical study
  2. grounded theory
  3. software testing
  4. test case selection

Qualifiers

  • Research-article

Conference

ESEM '10
Acceptance Rates

ESEM '10 Paper Acceptance Rate: 30 of 102 submissions, 29%
Overall Acceptance Rate: 130 of 594 submissions, 22%
