HKUST Business Review

Melody M. Chao is an associate professor of management at HKUST. Her research focuses on organizational behavior, mindsets, normative judgments, group processes, and intercultural relations. Jungmin Choi received her Ph.D. at HKUST and is now an assistant research professor at the University of Cambridge. Her research focuses on understanding organizational behavior in the context of technology integration at work. This article draws on the research paper “For Me or Against Me? Reactions to AI (vs. Human) Decisions That Are Favorable or Unfavorable to the Self and the Role of Fairness Perception,” published in Personality and Social Psychology Bulletin, authored by Melody M. Chao and Jungmin Choi.

Even if basic metrics suggest an AI system is performing fairly, deeper analyses may reveal hidden inequities or unintended consequences. By tracking not only the predictive performance or accuracy of an AI tool but also its downstream effects, such as employee turnover, applicant withdrawal, complaint rates, appeals, and other behavioral metrics, leaders can detect and address subtle issues before they escalate.

Demystify AI. Finally, our findings show that simply informing people about potential AI bias reduces blind trust in AI fairness, which highlights how powerful education can be. Employees who are unaware of AI’s limitations may be more likely to accept unfavorable outcomes, which may make them easier to manage in the short term. But ignorance is not bliss. It is the responsibility of managers and leaders to empower their teams by helping employees understand how AI works, why it can be biased, and what safeguards are needed. To that end, organizations can offer “algorithmic literacy” training to help employees and executives alike understand that AI is not magic. These training programs should explain the core principle of garbage in, garbage out: biased input leads to biased output.
In the context of performance appraisals, leaders can acknowledge that while AI is a powerful tool to help managers analyze performance data and peer feedback, it can only process what it has been trained to see. When people understand that an AI system can inherit human fallibility, they become more discerning users of these tools.

At the same time, communication must be handled carefully. Transparency builds trust, but excessive technical detail about AI can confuse, while shallow platitudes create false confidence. Effective communicators strike a balance, explaining key information in plain language and providing relevant details, such as human oversight mechanisms and appeal channels, without overwhelming their audience. Leadership teams should define which decisions will be automated and specify when humans must review, intervene, or override algorithmic decisions.

Using AI for Good: Shifting From Blind Acceptance to Informed Partnership. AI tools have the potential to add tremendous value to today’s workplaces. They can boost efficiency, facilitate decision-making, and in many cases increase fairness. But the belief that AI is an unimpeachable, objective judge is a dangerous fantasy. Our research shows that when people don’t know any better, they tend to assume AI makes fairer decisions than humans. Despite widespread narratives in the media suggesting that disgruntled employees may rebel against AI systems, our data suggest the opposite: the primary risk is not rebellion but resignation. Leaders must therefore take proactive steps to break down the halo of perceived fairness that blinds so many of us to the imperfections of AI, designing systems that leverage AI’s analytical power while ensuring human accountability. Only then can we move from blind acceptance of AI decisions to informed, responsible partnerships between humans and algorithms.
