A Framework for Integrating AI-Powered Systems to Mitigate Bias Risk in HRM Functions
DOI: https://doi.org/10.15170/MM.2025.59.02.05
Keywords: Artificial intelligence (AI), Human resources management (HRM), Diversity, Equity and Inclusion (DEI), Bias
Abstract
Integrating AI-powered systems into human resource management (HRM) functions offers advantages in enhancing decision-making, automating processes, and improving efficiency. However, these promises rest on increasingly opaque patterns and automated decision processes that may embed existing biases. Despite the recent surge of research on this emerging topic, a substantial gap remains between Artificial Intelligence's (AI) potential and its practice in HRM, particularly in addressing diversity, equity, and inclusion (DEI) concerns within AI-driven HRM systems. This paper aims to contribute to filling this gap by reviewing and analysing the existing literature in this domain and addressing two fundamental questions: first, what types of bias sources generated by AI-powered tools could affect DEI efforts, and second, what approaches and practices can reduce bias in AI-powered systems across HRM functions. The literature synthesis resulted in a conceptual framework outlining approaches to mitigating bias in AI-powered HRM systems across four key dimensions. Finally, the paper emphasises the need to overcome the discrimination challenges posed by HRM innovations and to optimise AI tools so that they strengthen DEI efforts within the HRM function.
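To make the idea of detecting bias in AI-assisted selection concrete, the following minimal sketch (illustrative only, not drawn from the paper or from any particular toolkit) audits the outcomes of a hypothetical AI screening tool by comparing selection rates across demographic groups and computing a disparate-impact ratio. The group labels and data are invented, and the 0.8 threshold follows the commonly cited four-fifths rule of thumb rather than any requirement stated in the article.

```python
# Illustrative sketch: outcome audit of a hypothetical AI screening tool.
# Groups, data, and the 0.8 threshold are assumptions for demonstration only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(decisions)
    base = rates[reference_group]
    return {g: rate / base for g, rate in rates.items()}

# Hypothetical outcomes produced by an AI screening tool (invented data).
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 24 + [("group_b", False)] * 76)

for group, ratio in disparate_impact(outcomes, "group_a").items():
    flag = "review for adverse impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection-rate ratio {ratio:.2f} -> {flag}")
```

Outcome audits of this kind are one simple, model-agnostic practice for monitoring AI-driven HRM systems; they complement, rather than replace, the data, design, and governance measures discussed in the paper.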
License
Copyright (c) 2025 The Hungarian Journal of Marketing and Management

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.