DOI: 10.1145/1414004.1414029
Research article

The impact of time controlled reading on software inspection effectiveness and efficiency: a controlled experiment

Published: 09 October 2008

Abstract

Reading techniques help guide reviewers during individual software inspections. In this experiment, we transfer the full principle of statistical usage testing to inspection reading techniques for the first time. Statistical usage testing relies on a usage profile to determine how intensively certain parts of the system should be tested from the users' perspective. Usage-based reading applies statistical usage testing principles by using prioritized use cases to drive the inspection of software artifacts (e.g., design documents). To reflect how intensively each use case should be inspected, time budgets are introduced to usage-based reading: a maximum inspection time is assigned to each use case, with high-priority use cases receiving more time than low-priority ones. A controlled experiment was conducted with 23 Software Engineering M.Sc. students inspecting a design document, comparing usage-based reading without time budgets against time-controlled usage-based reading. The result of the experiment is that time budgets do not significantly improve inspection performance. We conclude that prioritized use cases alone are sufficient to successfully transfer statistical usage testing to inspections.
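The abstract describes the time-budget mechanism only at the level of principle: each use case receives a maximum inspection time, and higher-priority use cases receive more. As a minimal sketch of one plausible allocation scheme, assuming a simple proportional split of a fixed session length by priority weight (the UseCase type, the example use cases, and the weights below are illustrative assumptions, not the instrument used in the experiment):

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    priority: int  # higher weight = inspected more intensively

def allocate_time_budgets(use_cases: list[UseCase], total_minutes: float) -> dict[str, float]:
    """Split a total inspection session across use cases in
    proportion to their priority weights."""
    total_weight = sum(uc.priority for uc in use_cases)
    return {uc.name: total_minutes * uc.priority / total_weight for uc in use_cases}

# Example: a 60-minute inspection of a design document against
# three hypothetical prioritized use cases.
budgets = allocate_time_budgets(
    [UseCase("Withdraw cash", 5), UseCase("Check balance", 3), UseCase("Print receipt", 1)],
    total_minutes=60,
)
for name, minutes in budgets.items():
    print(f"{name}: {minutes:.1f} min")  # 33.3, 20.0, 6.7
```

A proportional split is only one possible scheme; the actual per-use-case budgets used in the experiment are defined in the paper itself.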


Published In

ESEM '08: Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement
October 2008, 374 pages
ISBN: 9781595939715
DOI: 10.1145/1414004
General Chair: Dieter Rombach
Program Chairs: Sebastian Elbaum, Jürgen Münch

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. effectiveness
2. efficiency
3. experiment
4. software inspection
5. time-controlled usage-based reading


Conference

ESEM '08

Acceptance Rates

Overall acceptance rate: 130 of 594 submissions, 22%
