DOI: 10.1109/ISSRE.2004.1
Article

A Comparison of Bug Finding Tools for Java

Published: 02 November 2004

Abstract

Bugs in software are costly and difficult to find and fix. In recent years, many tools and techniques have been developed for automatically finding bugs by analyzing source code or intermediate code statically (at compile time). Different tools and techniques have different tradeoffs, but the practical impact of these tradeoffs is not well understood. In this paper, we apply five bug finding tools, specifically Bandera, ESC/Java 2, FindBugs, JLint, and PMD, to a variety of Java programs. By using a variety of tools, we are able to cross-check their bug reports and warnings. Our experimental results show that none of the tools strictly subsumes another, and indeed the tools often find non-overlapping bugs. We discuss the techniques each of the tools is based on, and we suggest how particular techniques affect the output of the tools. Finally, we propose a meta-tool that combines the output of the tools together, looking for particular lines of code, methods, and classes that many tools warn about.
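The meta-tool the abstract proposes can be sketched as a simple aggregator over per-tool warning lists: locations flagged by several independent tools are surfaced first. The data shapes, tool selections, and warning locations below are assumptions for illustration; the paper's actual implementation is not reproduced on this page.

```python
from collections import defaultdict

def rank_warned_locations(tool_warnings):
    """tool_warnings maps a tool name to the list of (file, line) pairs it
    warned about. Returns locations sorted by how many distinct tools
    flagged them, most-flagged first."""
    tools_per_location = defaultdict(set)
    for tool, locations in tool_warnings.items():
        for location in locations:
            tools_per_location[location].add(tool)
    return sorted(tools_per_location.items(),
                  key=lambda item: len(item[1]), reverse=True)

# Hypothetical warning lists from three of the five tools:
reports = {
    "FindBugs": [("Foo.java", 10), ("Foo.java", 42)],
    "JLint":    [("Foo.java", 42), ("Bar.java", 7)],
    "PMD":      [("Foo.java", 42)],
}
ranked = rank_warned_locations(reports)
# ("Foo.java", 42) is flagged by all three tools, so it ranks first.
```

Since the tools often find non-overlapping bugs, a location flagged by multiple tools is a comparatively strong signal, which is the intuition behind cross-checking their reports.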


Reviews

Andrew Brooks

Static analysis tools hold great potential in software quality assurance, but how effective are they? Five Java bug finding tools (Bandera, ESC/Java, FindBugs, JLint, and PMD) were applied to five programs ranging in size from around 8,000 to 55,000 lines of code. The tools often generated well over 1,000 warnings on any one program, so the authors chose to manually examine only several dozen warnings. Their analysis revealed wide variation in the warnings provided by the tools, that some warnings were not about real defects (false positives), and that a single bug can create a cascade of warnings.

Given the absence of a single best bug finding tool, the authors propose two metrics, the normalized warning count and the unique warning total, measured at the individual Java class level across the output from several bug finding tools. Metric results for the poorest performing classes (Figure 10) suggest that classes with an unusually high warning count also tend to have a larger breadth of unique warnings. These results, however, do not fully have proof-of-concept value, since no analysis was undertaken to match the real defects present in the code against the warnings reported by the tools.

This paper serves as a useful introduction to static analysis tools for Java, and makes several recommendations to improve these tools, whose usefulness is severely compromised by the sheer volume of warnings. As such, this paper is strongly recommended to those using and researching static analysis tools. Online Computing Reviews Service
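The two per-class metrics the review mentions can be sketched roughly as follows. The exact definitions used in the paper are not reproduced on this page, so the formulas here (warnings per thousand lines of code, and a count of distinct warning types across the combined tool output) are assumptions, as are the warning-type names.

```python
def normalized_warning_count(num_warnings, lines_of_code):
    """Warnings per thousand lines of code for one class (assumed definition)."""
    return 1000.0 * num_warnings / lines_of_code

def unique_warning_total(warnings):
    """Number of distinct warning types reported for one class across the
    combined output of all tools (assumed definition)."""
    return len({warning_type for (warning_type, _tool) in warnings})

# Hypothetical combined output for one 200-line class:
warnings = [("NULL_DEREF", "FindBugs"), ("NULL_DEREF", "JLint"),
            ("SYNC_BUG", "JLint"), ("UNUSED_VAR", "PMD")]
nwc = normalized_warning_count(len(warnings), 200)  # 4 warnings in 200 LOC
uwt = unique_warning_total(warnings)                # 3 distinct warning types
```

Normalizing by class size keeps large classes from dominating the ranking, while the unique warning total captures the breadth of problems rather than repeated instances of one pattern.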


Published In

ISSRE '04: Proceedings of the 15th International Symposium on Software Reliability Engineering
November 2004
441 pages
ISBN: 0769522157

Publisher

IEEE Computer Society

United States

