Ethical Risks and Point-of-Use Governance of AI in IPE: Insights from Interviews with 17 Instructors

Authors

  • Zhihao Wei, Chongqing College of Humanities, Science & Technology
  • Zhen Liu, Chongqing College of Humanities, Science & Technology
  • Tao Wang, Guangzhou Xinhua University
  • Lacong Yongzhen, Lijiang Culture and Tourism College

DOI:

https://doi.org/10.62177/chst.v2i3.599

Keywords:

Generative Artificial Intelligence (AI), Ideological and Political Education (IPE), Chinese Higher Education, Ethical Risks and Governance, Thematic Analysis

Abstract

This study examines ethical risks and workable governance for artificial intelligence in university ideological and political education in China. Semi-structured interviews with 17 instructors from five universities in Chongqing, conducted from March to June 2025, were analyzed using reflexive thematic analysis. Six themes characterize current practice: privacy and consent remain fragile in attendance, proctoring, and analytics; the teacher role shifts from authority to curator and ethical gatekeeper; recommendation and moderation shape visibility and the continuity of deliberation; assessment integrity benefits from process-based evidence and explicit disclosure; metric-driven activity targets can crowd out value reasoning; and governance and accountability depend on institutional capacity and consistent rules. The findings indicate that responsible integration requires governance at the point of use across classroom, platform, and institution, including course-level disclosure and granular consent; explainable moderation with instructor overrides and traceability; process-based assessment with AI-use disclosure; curated corpora linked to retrieval-augmented generation with citation binding; routine audits; and faculty training.




How to Cite

Wei, Z., Liu, Z., Wang, T., & Yongzhen, L. (2025). Ethical Risks and Point-of-Use Governance of AI in IPE: Insights from Interviews with 17 Instructors. Critical Humanistic Social Theory, 2(3). https://doi.org/10.62177/chst.v2i3.599

Section

Articles

Dates

Received: 2025-09-10
Accepted: 2025-09-16
Published: 2025-09-26