Technical Implementation of Large Language Models in Educational Scenarios: A Case Study of DeepSeek
DOI: https://doi.org/10.62177/amit.v1i3.472

Keywords: Large Language Models (LLMs), Educational Technology, DeepSeek

Abstract
Large Language Models (LLMs) present transformative potential for education, yet their practical deployment faces persistent challenges in domain knowledge adaptation, dynamic interaction design, and ethics compliance. This paper proposes and validates a pedagogical principle-driven framework for implementing the general-purpose LLM DeepSeek in K-12 to tertiary educational scenarios. Through a mixed-methods approach (technical benchmarking + empirical field trials), we demonstrate that DeepSeek's three-part strategy of
(1) curriculum-grounded knowledge graph augmentation,
(2) pedagogically aligned multimodal architecture, and
(3) collaborative teacher-in-the-loop refinement
effectively resolves critical conflicts between educational causality and AI stochasticity. Furthermore, we systematize domain-specific technical requirements, including:
(1) cross-modal alignment of symbolic and natural language systems (e.g., mathematical formalization),
(2) sub-second dynamic feedback (<300 ms latency), and
(3) federated learning solutions mitigating data privacy risks (7.2% utility loss vs. 39.2% baseline).
Empirical studies across 42 institutions confirm that the optimized framework elevates:
(1) STEM problem-solving accuracy to >90% (Δ+21.8% vs. generic models),
(2) student knowledge retention by 22.4% (p<0.001), and
(3) teacher adoption rates to 89% (SUS score).
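The federated learning requirement above can be illustrated with a minimal federated-averaging (FedAvg) sketch: each institution updates a model on its private data, and only model parameters, never student records, reach the server, which aggregates them weighted by local dataset size. This is a generic illustration of the technique, not the paper's actual implementation; all function names and values here are hypothetical.

```python
import numpy as np

def local_update(weights, grad, lr=0.1):
    """One client step of gradient descent on its private (never-shared) data."""
    return weights - lr * grad

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: three "institutions" train locally, then the server averages.
global_w = np.zeros(4)                                   # shared starting model
grads = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]     # hypothetical local gradients
sizes = [100, 200, 300]                                  # hypothetical local dataset sizes
local_models = [local_update(global_w, g) for g in grads]
global_w = fed_avg(local_models, sizes)                  # -> array of -0.2333...
```

The utility loss the paper reports (7.2% vs. a 39.2% baseline) would correspond to the accuracy gap between such a federated model and one trained on centralized data.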
This work provides a transferable paradigm for human-centered, ethically grounded LLM deployment in global education ecosystems.
License
Copyright (c) 2025 Pengfei Zhao, Xin Wan

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.