DOI: 10.1145/1595696.1595725

MSeqGen: object-oriented unit-test generation via mining source code

Published: 24 August 2009 Publication History

Abstract

An objective of unit testing is to achieve high structural coverage of the code under test. Achieving high structural coverage of object-oriented code requires desirable method-call sequences that create and mutate objects. These sequences help generate target object states, such as argument or receiver object states (in short, target states), of a method under test. Automatically generating sequences that achieve target states is often challenging due to the large search space of possible sequences. On the other hand, code bases that use the relevant object types (such as receiver or argument object types) already include sequences that can assist automatic test-generation approaches in achieving target states. In this paper, we propose a novel approach, called MSeqGen, that mines code bases and extracts sequences related to the receiver or argument object types of a method under test. Our approach uses these extracted sequences to enhance two state-of-the-art test-generation approaches: random testing and dynamic symbolic execution. We conduct two evaluations to show the effectiveness of our approach. Using sequences extracted by our approach, a random testing approach achieves 8.7% higher branch coverage (with a maximum of 20.0% for one namespace) and a dynamic-symbolic-execution-based approach achieves 17.4% higher branch coverage (with a maximum of 22.5% for one namespace) than without our approach. Such an improvement is significant because the branches not covered by these state-of-the-art approaches are generally quite difficult to cover.
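The core idea in the abstract, reusing method-call sequences mined from client code to reach object states that pure random generation rarely produces, can be illustrated with a small hypothetical sketch. All names below are invented for illustration and this is not the authors' implementation; MSeqGen itself operates on real .NET code bases.

```python
import random

# Hypothetical sketch of the mining-and-reuse idea described in the
# abstract. A "code base" is modeled as a set of recorded call
# sequences, each tagged with the object type it creates and mutates.
CORPUS = [
    ("Stack", ["Stack()", "push(1)", "push(2)"]),
    ("Stack", ["Stack()", "push(1)", "pop()"]),
    ("Queue", ["Queue()", "enqueue(5)"]),
]

def mine_sequences(corpus, target_type):
    """Extract the call sequences relevant to the target type."""
    return [seq for typ, seq in corpus if typ == target_type]

def generate_test(target_type, corpus, rng):
    """Seed a test with a mined sequence when one exists; otherwise
    fall back to a bare constructor call (pure random generation)."""
    mined = mine_sequences(corpus, target_type)
    if mined:
        return list(rng.choice(mined))
    return [target_type + "()"]

rng = random.Random(0)
print(generate_test("Stack", CORPUS, rng))  # a mined Stack sequence
print(generate_test("Tree", CORPUS, rng))   # no mined data: bare constructor
```

Per the abstract, the mined sequences are not emitted verbatim as tests; they are used to enhance random testing and dynamic symbolic execution, which then explore variations of those seeded states.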


Published In

ESEC/FSE '09: Proceedings of the 7th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering
August 2009
408 pages
ISBN:9781605580012
DOI:10.1145/1595696
Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. object-oriented testing
  2. sequence mining

Qualifiers

  • Research-article

Conference

ESEC/FSE09
Sponsor:
ESEC/FSE09: Joint 12th European Software Engineering Conference
August 24 - 28, 2009
Amsterdam, The Netherlands

Acceptance Rates

  • ESEC/FSE '09 paper acceptance rate: 32 of 217 submissions (15%)
  • Overall acceptance rate: 112 of 543 submissions (21%)
