Explainability, Human Oversight, and Procedural Justice in AI-Assisted Promotion Decisions: An Integrative Review for Chinese Organizations
DOI: https://doi.org/10.62177/amit.v2i2.1341

Keywords: Explainable AI, Human Oversight, Procedural Justice, Promotion Decisions, Chinese Organizations, HR Analytics

Abstract
AI tools are increasingly used to support internal talent decisions, yet promotion decisions pose a distinct governance problem because they involve future potential, not only past performance. Existing research has concentrated on recruitment screening or model accuracy, while the combined role of explainability, human oversight, and procedural justice in promotion contexts remains less settled. This paper develops an integrative review of research across human resource management, information systems, human-computer interaction, and AI governance to examine how managers and employees may respond to AI-assisted promotion decisions, with particular attention to Chinese organizations. Four conclusions emerge. First, explanations can improve perceived transparency, but they do not automatically protect users from poor AI advice. Second, human oversight only adds value when managers have both the authority and the criteria to question model output. Third, fairness in promotion decisions depends on voice, correctability, relevance of data, and accountability, rather than on statistical performance alone. Fourth, the Chinese regulatory context places additional emphasis on transparent and fair automated decision-making, which makes documentation, review, and appeal mechanisms especially important. On that basis, the paper proposes a practical framework for responsible AI-assisted promotion decisions built around data governance, interpretable evidence, structured human review, and employee contestability. The central argument is that organizations should not aim for uncritical trust in AI. They should aim for disciplined, reviewable, and job-relevant use.
License
Copyright (c) 2026 Peng Zhang, Rozaini Binti Rosli

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
