Open Access

Deep-Context-Awareness-Based LLM Code Generation and Accurate-Defect-Repair Integrated Architecture


DOI: 10.23977/jaip.2025.080217

Author(s)

Jiashun Guo 1

Affiliation(s)

1 USANA Health Sciences, Beijing, 100036, China

Corresponding Author

Jiashun Guo

ABSTRACT

In response to the problems of context fragmentation and delayed defect repair in large language model (LLM) code generation, this paper proposes a deep context-aware generation-repair fusion architecture. Through a bidirectional collaboration mechanism, it achieves a paradigm shift toward "generation as correctness." The architecture constructs multi-granularity context encoding models that dynamically integrate code structure, developer intent, and project-level constraints, and it combines these with a neural-symbolic collaboration framework that deeply couples the LLM's generative capability with the reliability of formal verification. The generation module uses graph attention networks to establish cross-file semantic associations, while the repair module locates and fixes defects precisely through probability-guided patch search and a hierarchical verification strategy. The architecture supports multi-objective optimization and reinforcement learning from human feedback (RLHF) to balance code quality, performance, and security requirements, and it generates traceable decision chains to ensure ethical compliance. This study offers a new generation of automated software engineering solutions that combine efficiency with credibility, and lays the technical groundwork for future directions such as multimodal context expansion and quantized code analysis.
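The repair loop named in the abstract, probability-guided patch search followed by hierarchical verification, can be illustrated with a minimal Python sketch. Everything below (the function names, the toy defect, and the two-tier check) is an illustrative assumption, not the paper's actual implementation: candidate patches carry model-assigned probabilities, are explored best-first, and each candidate must pass a cheap syntactic tier before the more expensive dynamic test tier.

```python
import ast
import heapq

def hierarchical_verify(source, tests):
    """Tiered verification: a cheap static syntax check first,
    then the more expensive dynamic test tier."""
    try:
        ast.parse(source)                      # tier 1: syntax
    except SyntaxError:
        return False
    env = {}
    exec(source, env)                          # tier 2: run unit tests
    return all(test(env) for test in tests)

def probability_guided_repair(buggy_source, candidates, tests):
    """Best-first search over candidate patches, ordered by their
    model-assigned probability; returns the first verified patch."""
    heap = [(-prob, i, patch) for i, (prob, patch) in enumerate(candidates)]
    heapq.heapify(heap)
    while heap:
        neg_prob, _, (old, new) = heapq.heappop(heap)
        fixed = buggy_source.replace(old, new)
        if hierarchical_verify(fixed, tests):
            return fixed, -neg_prob
    return None, 0.0

# Toy defect: `add` subtracts instead of adding.
buggy = "def add(a, b):\n    return a - b\n"
candidates = [                                  # (probability, (old, new))
    (0.3, ("a - b", "a * b")),                  # plausible but fails tests
    (0.6, ("a - b", "a + b")),                  # correct, explored first
    (0.1, ("return a - b", "return a + b +")),  # rejected at the syntax tier
]
tests = [lambda env: env["add"](2, 3) == 5]
fixed, prob = probability_guided_repair(buggy, candidates, tests)
```

In this toy run the highest-probability patch passes both tiers, so the search stops after a single dynamic verification; the syntactically broken candidate would have been discarded at the cheap tier without ever being executed, which is the point of ordering the verification hierarchy from cheapest to most expensive.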

KEYWORDS

Deep context awareness; LLM code; precise defect repair; fusion framework

CITE THIS PAPER

Jiashun Guo, Deep-Context-Awareness-Based LLM Code Generation and Accurate-Defect-Repair Integrated Architecture. Journal of Artificial Intelligence Practice (2025) Vol. 8: 125-133. DOI: http://dx.doi.org/10.23977/jaip.2025.080217.





All published work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2016 - 2031 Clausius Scientific Press Inc. All Rights Reserved.