DOI: 10.1145/3540250.3558950

Understanding automated code review process and developer experience in industry

Published: 09 November 2022

Abstract

Code review automation can reduce human effort during code review by automatically providing valuable information to reviewers. Nevertheless, automating the process is a challenge for large-scale companies such as Samsung Electronics because of their complexity: heterogeneous development environments, frequent review requests, a huge codebase, and diverse processes across teams.
In this work, we show how we automated the code review process in such intricate environments and share lessons learned from two years of operation. Our unified code review automation system, Code Review Bot, is designed to process review requests holistically, regardless of the environment, and checks various quality-assurance items such as potential defects in the code, coding style, test coverage, and open source license violations.
Key findings include: 1) about 60% of the issues found by Code Review Bot were reviewed and fixed ahead of product releases; 2) more than 70% of developers gave positive feedback about the system; 3) developers responded to reviews quickly and actively; and 4) compared with the internal policy that encourages code review activities, the automation had little effect on the amount or frequency of human code reviews. Our findings provide practical evidence that automating code review helps assure software quality.
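
To make the system design concrete, the following is a minimal sketch in Python of the fan-out idea the abstract describes: a single entry point accepts a review request from any environment and dispatches it to independent quality-assurance checkers. All names here (ReviewRequest, Finding, CHECKERS, review) are hypothetical illustrations of the idea, not the authors' actual implementation.

# Hypothetical sketch of a unified review pipeline: one entry point accepts
# a review request (whatever tool it came from) and fans it out to pluggable
# checkers, aggregating their findings into a single set of review comments.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewRequest:
    repo: str
    change_id: str
    files: List[str] = field(default_factory=list)

@dataclass
class Finding:
    checker: str
    message: str

Checker = Callable[[ReviewRequest], List[Finding]]

def check_style(req: ReviewRequest) -> List[Finding]:
    # Placeholder: a production bot would run a per-language linter here.
    return [Finding("style", f"{path}: no style issues") for path in req.files]

def check_license(req: ReviewRequest) -> List[Finding]:
    # Placeholder: a production bot would scan the change for open source
    # license violations here.
    return []

CHECKERS: List[Checker] = [check_style, check_license]

def review(req: ReviewRequest) -> List[Finding]:
    """Run every registered checker and merge the results, so reviewers get
    one consolidated report regardless of the originating environment."""
    findings: List[Finding] = []
    for checker in CHECKERS:
        findings.extend(checker(req))
    return findings

if __name__ == "__main__":
    request = ReviewRequest(repo="example/app", change_id="42", files=["main.c"])
    for finding in review(request):
        print(f"[{finding.checker}] {finding.message}")

Keeping checkers in a plain registry list is what makes such a pipeline extensible: defect, style, coverage, and license checks stay decoupled, so one system can serve many teams' processes.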


Information

Published In

ESEC/FSE 2022: Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
November 2022
1822 pages
ISBN: 9781450394130
DOI: 10.1145/3540250
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. code review
  2. code review automation
  3. review bot
  4. static analysis

Qualifiers

  • Research-article

Conference

ESEC/FSE '22

Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%
