Middle Managers and AI-Assisted Performance Appraisal in Chinese Enterprises: Explainability, AI Literacy, and the Case for Human Oversight
DOI: https://doi.org/10.62177/apemr.v3i2.1342

Keywords: AI-Assisted Performance Appraisal, Middle Managers, Explainability, AI Literacy, Human Oversight, Algorithmic Fairness, Chinese Enterprises

Abstract
As AI-based appraisal tools move from pilot projects into daily HR routines, middle managers face a practical dilemma. They are asked to use algorithmic scores and model-generated recommendations in rating, feedback, promotion, and compensation decisions, yet they remain accountable for explaining those decisions to employees. This article offers a structured review of research on AI-assisted performance appraisal, algorithmic management, explainability, AI literacy, and procedural fairness. Rather than treating managerial confidence as a simple matter of technological acceptance, the article argues that what matters is calibrated reliance: the ability to use AI outputs seriously without surrendering judgment to them. Three conditions recur across the literature. First, explainability reduces procedural opacity and makes appraisal outcomes easier to defend. Second, AI literacy equips managers to interpret outputs, detect limitations, and resist both blind trust and reflexive rejection. Third, human oversight preserves accountability, contextual correction, and respectful treatment in employee-facing decisions. Building on these themes, the article develops an integrative framework linking explainability, AI literacy, and oversight to middle managers' reliance on AI-assisted appraisal. The discussion then considers practical implications for Chinese enterprises, where performance evaluation often carries strong consequences for pay, promotion, and internal mobility. The paper contributes to the AI-HRM literature by shifting attention from adoption alone to the managerial conditions under which AI-supported appraisal becomes usable, legitimate, and organizationally defensible.
References
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15-42. https://doi.org/10.1177/0008125619867910
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586. https://doi.org/10.1016/j.bushor.2018.03.007
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410. https://doi.org/10.5465/annals.2018.0174
Pan, Y., & Froese, F. J. (2023). An interdisciplinary review of AI and HRM: Challenges and future directions. Human Resource Management Review, 33(1), 100924. https://doi.org/10.1016/j.hrmr.2022.100924
Varma, A., Dawkins, C., & Chaudhuri, K. (2023). Artificial intelligence and people management: A critical assessment through the ethical lens. Human Resource Management Review, 33(1), 100923. https://doi.org/10.1016/j.hrmr.2022.100923
Rodgers, W., Murray, J. M., Stefanidis, A., Degbey, W. Y., & Tarba, S. Y. (2023). An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Human Resource Management Review, 33(1), 100925. https://doi.org/10.1016/j.hrmr.2022.100925
Langer, M., & König, C. J. (2023). Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review, 33(1), 100881. https://doi.org/10.1016/j.hrmr.2021.100881
Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9), 1600-1631. https://doi.org/10.1002/smj.3322
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1-16. https://doi.org/10.1177/2053951718756684
Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Human-Computer Interaction, 35(5-6), 545-575. https://doi.org/10.1080/07370024.2020.1735391
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. https://doi.org/10.5465/annals.2018.0057
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. https://doi.org/10.1037/xge0000033
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170. https://doi.org/10.1287/mnsc.2016.2643
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103. https://doi.org/10.1016/j.obhdp.2018.12.005
Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376727
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Pinski, M., Hofmann, T., & Benlian, A. (2024). AI literacy for the top management: An upper echelons perspective on corporate AI orientation and implementation ability. Electronic Markets, 34(1), 1-23. https://doi.org/10.1007/s12525-024-00707-1
Qin, S., Jia, N., Luo, X., Liao, C., & Huang, Z. (2023). Perceived fairness of human managers compared with artificial intelligence in employee performance evaluation. Journal of Management Information Systems, 40(4), 1039-1070. https://doi.org/10.1080/07421222.2023.2267316
Chun, J. S., De Cremer, D., Oh, E. J., & Kim, Y. (2024). What algorithmic evaluation fails to deliver: Respectful treatment and individualized consideration. Scientific Reports, 14(1), 25996. https://doi.org/10.1038/s41598-024-76320-1
License
Copyright (c) 2026 Peng Zhang, Rozaini Binti Rosli

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.