DOI: 10.1145/3243127.3243130

Automatically assessing vulnerabilities discovered by compositional analysis

Published: 03 September 2018

Abstract

Testing is the most widely employed method to find vulnerabilities in real-world software programs. Compositional analysis, based on symbolic execution, is an automated testing method to find vulnerabilities in medium- to large-scale programs consisting of many interacting components. However, existing compositional analysis frameworks do not assess the severity of reported vulnerabilities. In this paper, we present a framework to analyze vulnerabilities discovered by an existing compositional analysis tool and assign CVSS3 (Common Vulnerability Scoring System v3.0) scores to them, based on various heuristics such as interaction with related components, ease of reachability, complexity of design and likelihood of accepting unsanitized input. By analyzing vulnerabilities reported with CVSS3 scores in the past, we train simple machine learning models. By presenting our interactive framework to developers of popular open-source software and other security experts, we gather feedback on our trained models and further improve the features to increase the accuracy of our predictions. By providing qualitative (based on community feedback) and quantitative (based on prediction accuracy) evidence from 21 open-source programs, we show that our severity prediction framework can effectively assist developers with assessing vulnerabilities.
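
The pipeline outlined in the abstract (extract heuristic features for each reported vulnerability, then learn from past CVSS3-scored reports) can be illustrated with a small, hypothetical example. The sketch below is not the authors' implementation: the feature names, the toy training data, and the choice of a random-forest regressor built with scikit-learn are assumptions made purely for illustration.

# A minimal sketch, assuming scikit-learn and NumPy are available: train a
# simple regressor on heuristic features of previously assessed vulnerabilities
# and predict the CVSS3 base score of a newly reported one. Feature names and
# all values below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# One row per past vulnerability:
# [interacting_components, call_chain_depth, cyclomatic_complexity, unsanitized_input_reachable]
X_train = np.array([
    [3, 2, 14, 1],
    [1, 6,  5, 0],
    [5, 1, 22, 1],
    [2, 4,  9, 0],
])
# Corresponding CVSS3 base scores (0.0-10.0) from past assessments.
y_train = np.array([8.1, 3.7, 9.8, 5.3])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a newly discovered vulnerability reported by the compositional analysis tool.
new_vuln = np.array([[4, 2, 18, 1]])
print("Predicted CVSS3 base score: %.1f" % model.predict(new_vuln)[0])

In practice such a model would be trained on features extracted automatically from the analysis reports, with CVSS3 labels taken from a source such as the National Vulnerability Database; a classifier over severity classes could equally be used, since the abstract does not commit to a specific model family.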

Published In

MASES 2018: Proceedings of the 1st International Workshop on Machine Learning and Software Engineering in Symbiosis
September 2018, 52 pages
ISBN: 9781450359726
DOI: 10.1145/3243127
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. compositional analysis
  2. software testing
  3. symbolic execution
  4. vulnerability assessment

Qualifiers

  • Research-article

Conference

ASE '18
