DOI: 10.1145/2025113.2025180

SCORE: a scalable concolic testing tool for reliable embedded software

Published: 09 September 2011

Abstract

Current industrial testing practice often generates test cases manually, which degrades both the effectiveness and the efficiency of testing. To alleviate this problem, concolic testing automatically generates test cases that can achieve high coverage. One main task of concolic testing is to extract symbolic information from a concrete execution of a target program at runtime; thus, the design decision of how to extract symbolic information affects the efficiency, effectiveness, and applicability of concolic testing. We have developed SCORE (a Scalable COncolic testing tool for REliable embedded software), which targets embedded C programs. SCORE instruments a target C program to extract symbolic information and applies concolic testing in a scalable manner by utilizing a large number of distributed computing nodes. In this paper, we describe the design decisions implemented in SCORE and demonstrate its performance through experiments on the SIR benchmarks.
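The concolic loop the abstract refers to (execute concretely, record the path condition at each branch, negate a branch condition, solve for a new input) can be sketched as a small toy. This is an illustrative approximation only, not SCORE's implementation: SCORE instruments C programs and a real tool would call an SMT solver, whereas here branch predicates are Python closures and a brute-force search over a small integer domain stands in for the solver.

```python
# Toy concolic-testing loop (hypothetical sketch; not SCORE's code).

def target(x, y):
    # Example program under test with two branch points. Each branch
    # records (predicate, concrete outcome) -- the "path condition".
    path = []
    p1 = lambda x, y: x > 5
    path.append((p1, p1(x, y)))
    if p1(x, y):
        p2 = lambda x, y: y == x * 2
        path.append((p2, p2(x, y)))
        if p2(x, y):
            pass  # the "deep" branch we want coverage of
    return path

def solve(prefix):
    # Stand-in for an SMT solver: brute-force any (x, y) satisfying
    # every (predicate, desired_outcome) pair in the path prefix.
    for x in range(-10, 21):
        for y in range(-10, 41):
            if all(pred(x, y) == want for pred, want in prefix):
                return (x, y)
    return None

def concolic(seed):
    # Classic concolic loop: run concretely, then negate each branch of
    # the recorded path condition to steer execution down new paths.
    worklist, seen_paths, inputs = [seed], set(), []
    while worklist:
        x, y = worklist.pop()
        path = target(x, y)
        key = tuple(taken for _, taken in path)
        if key in seen_paths:
            continue
        seen_paths.add(key)
        inputs.append((x, y))
        for i in reversed(range(len(path))):
            pred, taken = path[i]
            prefix = path[:i] + [(pred, not taken)]
            new = solve(prefix)
            if new is not None:
                worklist.append(new)
    return inputs

for x, y in concolic((0, 0)):
    print("generated input:", (x, y))
```

Starting from a single seed, the loop discovers inputs for all three feasible paths of `target`; SCORE distributes this search over many computing nodes and dispatches unexplored path conditions to them, which is where the scalability claim comes from.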


Published In

ESEC/FSE '11: Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering
September 2011
548 pages
ISBN:9781450304436
DOI:10.1145/2025113

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. distributed concolic testing
  2. embedded software

Qualifiers

  • Demonstration

Conference

ESEC/FSE'11

Acceptance Rates

Overall Acceptance Rate 17 of 128 submissions, 13%
