DOI: 10.1145/3700297.3700342

Opportunities and Challenges in the Cultivation of Software Development Professionals in the Context of Large Language Models

Published: 18 November 2024

Abstract

Against the backdrop of the rapid development of Large Language Models (LLMs), the field of software development has undergone significant transformation, presenting both opportunities and challenges for the cultivation of software development professionals. This study systematically analyzes the applications of LLMs in software development and their impact on professional cultivation, examining the opportunities they create for enhancing programming efficiency, promoting personalized learning, and improving interdisciplinary skills, as well as the challenge of over-reliance on LLMs and related tools. Through a literature analysis, the study reviews the impact of LLMs on programming efficiency, code quality, and project management, and evaluates the requirements and directions for professional cultivation in response to these changes. The results indicate that while LLMs bring numerous opportunities, they also pose challenges such as rapid technological change and a tendency toward over-reliance on tools. The study therefore proposes a set of optimized cultivation strategies aligned with the technological developments and industry demands of the new era, aimed at strengthening the capacity of higher education institutions to cultivate software professionals who meet future needs.
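To make the kind of LLM-assisted development workflow discussed in the abstract concrete, the sketch below shows one way an LLM could be wired into a code-review step. This example is not taken from the paper: the OpenAI Python client, the model name, and the prompt wording are all illustrative assumptions, and any comparable model or API could be substituted.

```python
# Minimal sketch (not from the paper) of an LLM-assisted code-review step.
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set in
# the environment; the model name below is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

def review_code(snippet: str) -> str:
    """Ask an LLM for a short review of a code snippet (bugs, style, missing tests)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not prescribed by the paper
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Point out bugs, style issues, "
                        "and missing tests, briefly and concretely."},
            {"role": "user", "content": f"Review this code:\n\n{snippet}"},
        ],
        temperature=0.2,  # keep the review focused rather than exploratory
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "def mean(xs): return sum(xs) / len(xs)"  # fails on empty input
    print(review_code(sample))
```

In a teaching context, the value of such a step lies less in the generated review itself than in having students compare it with their own review, which speaks directly to the over-reliance concern the abstract raises.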


    Published In

    ISAIE '24: Proceedings of the 2024 International Symposium on Artificial Intelligence for Education
    September 2024, 651 pages
    ISBN: 9798400707100
    DOI: 10.1145/3700297

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Cultivation
    2. Educational Reform
    3. Large Language Models
    4. Personalized Learning
    5. Software Development Professionals

    Qualifiers

    • Research-article

    Conference

    ISAIE 2024

