From Clarity to Conviction: Instrumental Limits and Integration Pathways for Generative Artificial Intelligence in University Ideological and Political Education

Authors

  • Zhihao Wei, Chongqing College of Humanities, Science & Technology
  • Zhen Liu, Chongqing College of Humanities, Science & Technology
  • Tao Wang, Guangzhou Xinhua University

DOI:

https://doi.org/10.62177/jetp.v2i3.552

Keywords:

Generative Artificial Intelligence, Ideological and Political Education (IPE), Chinese Higher Education

Abstract

This qualitative study examines how generative artificial intelligence is being integrated into university ideological and political education (IPE) in China and delineates the conditions under which its instrumental rationality reaches its practical limits. We conducted semi-structured interviews with 17 instructors from five universities in Chongqing (45–120 minutes each, in Chinese); interviews were audio-recorded, transcribed verbatim, and analyzed using reflexive thematic analysis (RTA). Sampling and stopping were guided by information power: we judged the data adequate once the developing patterns were sufficiently rich and relevant to the research questions. NVivo 12 supported data management. We identified three themes: attenuation of the affective and faith dimensions of IPE; content complexity and the limits of AI understanding; and the insufficiency of high-quality, compliant training data. Building on these findings, we propose an integration framework that aligns classroom practice with platform support and institutional governance, and we offer actionable recommendations for policymakers, universities, and instructors.

How to Cite

Wei, Z., Liu, Z., & Wang, T. (2025). From Clarity to Conviction: Instrumental Limits and Integration Pathways for Generative Artificial Intelligence in University Ideological and Political Education. Journal of Educational Theory and Practice, 2(3). https://doi.org/10.62177/jetp.v2i3.552

Section

Articles

Dates

Received: 2025-08-26
Accepted: 2025-09-05
Published: 2025-09-11