
Exploring the Impact of Code Clones on Deep Learning Software

Published: 28 September 2023

Abstract

Deep learning (DL) has been a highly active research area in recent years. Code cloning is a common implementation practice that can negatively impact software maintenance. In DL software, developers rely heavily on frameworks to implement DL features, and to ensure efficiency they often reuse the steps and configuration settings for building DL models. These practices can lead to copy-pasted or reused code, which induces code clones. However, little work has explored the impact of code clones on DL software. In this article, we conduct an empirical study and show that: (1) code clones are prevalent in DL projects; about 16.3% of code fragments are involved in clones, almost twice the proportion observed in traditional projects; (2) 75.6% of DL projects contain co-changed clones, meaning that changes are propagated among cloned fragments, which can create maintenance difficulties; (3) the percentage of clones and the number of cloned lines are associated with the emergence of co-changes; (4) the prevalence of code clones varies across DL projects built on different frameworks, but the difference is not significant; (5) Type 1 co-changed clones often spread over different folders, whereas Type 2 and Type 3 co-changed clones mainly occur within the same files or folders; (6) 57.1% of all co-changed clones are involved in bugs.
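To make the reuse pattern described in the abstract concrete, below is a minimal, hypothetical sketch (not drawn from the paper's dataset; the function and dataset names are illustrative) of how duplicated model-building boilerplate can arise in DL code that relies on a framework such as TensorFlow/Keras. The two functions differ only in identifiers and a single layer width, i.e., they form a near-miss (Type 2/3) clone pair of the kind the study detects.

```python
# Hypothetical illustration of cloned DL boilerplate; names and values are made up.
import tensorflow as tf

def build_mnist_classifier(num_classes=10):
    # Standard Keras model-building steps that are frequently copy-pasted between scripts.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def build_fashion_classifier(num_labels=10):
    # Near-identical copy: only identifiers and one layer width differ (a Type 2/3 clone).
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_labels, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

If the compilation settings (say, the loss or optimizer) later need to change, both fragments must be edited together; this is the co-change behavior the study measures, and when only one copy is updated, the resulting inconsistency can surface as the clone-related bugs reported in finding (6).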


Cited By

  • (2024) Contract-based Validation of Conceptual Design Bugs for Engineering Complex Machine Learning Software. Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems. DOI:10.1145/3652620.3688201, 155-161. Online publication date: 22-Sep-2024.


Published In

ACM Transactions on Software Engineering and Methodology, Volume 32, Issue 6
November 2023
949 pages
ISSN:1049-331X
EISSN:1557-7392
DOI:10.1145/3625557
  • Editor: Mauro Pezzè

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 September 2023
Online AM: 03 July 2023
Accepted: 20 June 2023
Revised: 14 June 2023
Received: 27 July 2022
Published in TOSEM Volume 32, Issue 6


Author Tags

  1. Deep learning software
  2. code clone
  3. co-changed clone

Qualifiers

  • Research-article

Funding Sources

  • National Natural Science Foundation of China
  • Natural Science Foundation of Hubei Province of China
  • Knowledge Innovation Program of Wuhan-Shuguang Project

