
A Systematic Survey of Just-in-Time Software Defect Prediction

Published: 02 February 2023

Abstract

Recent years have seen a sustained research focus on software defect prediction, which aims to predict the likelihood of software defects. Moreover, with the increased interest in continuous deployment, a variant of software defect prediction called Just-in-Time Software Defect Prediction (JIT-SDP) focuses on predicting whether each incremental software change is defective. JIT-SDP is unique in that it consists of two interconnected data streams: one consisting of the arrivals of software changes stemming from design and implementation, and the other of the (defective or clean) labels of software changes resulting from quality assurance processes.
We present a systematic survey of 67 JIT-SDP studies with the objective of helping researchers advance the state of the art in JIT-SDP and helping practitioners become familiar with recent progress. We summarize best practices in each phase of the JIT-SDP workflow, carry out a meta-analysis of prior studies, and suggest future research directions. Our meta-analysis indicates, among other findings, that predictive performance correlates with the change defect ratio, suggesting that JIT-SDP performs best in projects with relatively high defect ratios. Future research directions for JIT-SDP include situating each technique in its application domain, reliability-aware JIT-SDP, and user-centered JIT-SDP.
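To make the prediction task concrete, the sketch below shows a minimal JIT-SDP-style classifier: change-level metrics are fed to an off-the-shelf model, and evaluation respects the order in which changes arrive so that earlier changes are used to predict later ones. The metric names, synthetic data, and choice of model (scikit-learn's random forest) are illustrative assumptions only and do not represent the setup of any surveyed study.

    # Minimal, illustrative JIT-SDP sketch: predict whether each software
    # change is defective from change-level metrics. All data and metric
    # names here are synthetic assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_changes = 1000

    # Synthetic change-level metrics in the spirit of common JIT-SDP features:
    # lines added, lines deleted, files touched, and developer experience.
    X = np.column_stack([
        rng.poisson(30, n_changes),        # lines added
        rng.poisson(10, n_changes),        # lines deleted
        rng.poisson(3, n_changes),         # files touched
        rng.integers(1, 200, n_changes),   # prior changes by the author
    ])

    # Synthetic labels: larger, more scattered changes are more likely defective.
    risk = 0.02 * X[:, 0] + 0.03 * X[:, 1] + 0.3 * X[:, 2] - 0.01 * X[:, 3]
    y = (risk + rng.normal(0, 1, n_changes) > 1.5).astype(int)

    # Respect the streaming nature of JIT-SDP: train on earlier changes,
    # evaluate on later ones, rather than shuffling randomly.
    split = int(0.7 * n_changes)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:split], y[:split])
    scores = clf.predict_proba(X[split:])[:, 1]
    print("AUC on later changes:", round(roc_auc_score(y[split:], scores), 3))

In a real deployment the labels of recent changes would not yet be available at prediction time (the second data stream mentioned above arrives with a delay from quality assurance), which is one reason the survey treats JIT-SDP as an online learning problem rather than a one-off train/test split.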

Supplementary Material

3567550-supp.pdf (supplementary material)




Published In

ACM Computing Surveys, Volume 55, Issue 10
October 2023, 772 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3567475

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 02 February 2023
Online AM: 17 October 2022
Accepted: 23 August 2022
Revised: 06 May 2022
Received: 03 June 2021
Published in CSUR Volume 55, Issue 10


Author Tags

  1. Software defect prediction
  2. release software defect prediction
  3. just-in-time software defect prediction
  4. change-level software defect prediction
  5. machine learning
  6. searching-based algorithms
  7. software change metrics
  8. change defect density

Qualifiers

  • Survey
  • Refereed
