
MockSniffer: characterizing and recommending mocking decisions for unit tests

Published: 27 January 2021
DOI: 10.1145/3324884.3416539

Abstract

In unit testing, mocking is widely used to ease testing effort, reduce test flakiness, and increase test coverage by replacing actual dependencies with simple implementations. However, there are no clear criteria for determining which dependencies in a unit test should be mocked. Inappropriate mocking can have undesirable consequences: under-mocking can leave the class under test (CUT) insufficiently isolated from its dependencies, while over-mocking increases the developers' burden of maintaining the mocked objects and may lead to spurious test failures. According to existing work, various factors can determine whether a dependency should be mocked. As a result, mocking decisions are often difficult to make in practice. Studies on the evolution of mocked objects also showed that developers tend to change their mocking decisions: 17% of the studied mocked objects were introduced sometime after the test scripts were created, and another 13% of the originally mocked objects eventually became unmocked. In this work, we develop an automated technique that makes mocking recommendations to facilitate unit testing. We studied 10,846 test scripts in four actively maintained open-source projects that use mocked objects, aiming to characterize the dependencies that are mocked in unit testing. Based on our observations of mocking practices, we designed and implemented a tool, MockSniffer, to identify and recommend mocks for unit tests. The tool is fully automated and requires only the CUT and its dependencies as input. It leverages machine learning techniques to make mocking recommendations by holistically considering multiple factors that can affect developers' mocking decisions. Our evaluation of MockSniffer on ten open-source projects showed that it outperformed three baseline approaches and achieved good performance in two potential application scenarios.
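To make the mocking trade-off concrete, below is a minimal sketch of a mocked dependency in a Java unit test using JUnit 4 and the Mockito framework. The KeyProvider and CachingKeyStore types are hypothetical stand-ins, not classes from the paper: KeyProvider plays the role of a dependency that would be slow or unreliable to exercise for real, and CachingKeyStore is the class under test (CUT).

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Hypothetical production types, shown here only for illustration.
    interface KeyProvider {
        String getKey(String name); // imagine this hits a remote key service
    }

    class CachingKeyStore { // the class under test (CUT)
        private final KeyProvider provider;
        CachingKeyStore(KeyProvider provider) { this.provider = provider; }
        String lookup(String name) { return provider.getKey(name); }
    }

    public class CachingKeyStoreTest {
        @Test
        public void lookupDelegatesToProvider() {
            // Replace the real dependency with a mock so the test is
            // isolated from the environment, deterministic, and fast.
            KeyProvider provider = mock(KeyProvider.class);
            when(provider.getKey("k1")).thenReturn("secret");

            CachingKeyStore store = new CachingKeyStore(provider);

            assertEquals("secret", store.lookup("k1"));
            verify(provider).getKey("k1"); // interaction check on the mock
        }
    }

Mocking KeyProvider here buys isolation, but every stubbed call (the when/thenReturn line) is a behavioral assumption that must be kept in sync with the real implementation; that maintenance cost is exactly the over-mocking risk described above, and the mock-or-not decision for each such dependency is what MockSniffer aims to recommend automatically.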



Published In

ASE '20: Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering
December 2020
1449 pages
ISBN: 9781450367684
DOI: 10.1145/3324884

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. dependencies
  2. mocking
  3. recommendation system
  4. unit testing

Qualifiers

  • Research-article

Funding Sources

  • National Natural Science Foundation of China
  • General Research Fund
  • Guangdong Provincial Key Laboratory
  • Hong Kong Research Grant Council
  • WeBank-HKUST Joint Laboratory

Conference

ASE '20

Acceptance Rates

Overall acceptance rate: 82 of 337 submissions (24%)
