HKUST Business Review

Good news always seems fair — but bad news seems fairer when it comes from AI.

[Figure: Average perceived fairness ratings (scale of 1 to 7) for favorable and unfavorable decisions made by an HR manager versus an AI system. When the outcome was unfavorable, participants consistently believed that the decision was fairer if it came from AI.]

Interestingly, our research also revealed a way to short-circuit this effect. When we reminded participants that AI algorithms often replicate human biases, they no longer believed that the AI’s judgment was impartial. As a result, they reacted to negative outcomes in the same way regardless of whether the decision came from a human or an AI system.

Making the Most of AI at Work

AI systems can add real value across many professional contexts. But they are still far from perfect, and the assumption that they are inherently fairer than human decision-makers carries substantial risks. Our findings therefore suggest several practical takeaways for leaders aiming to leverage AI safely and effectively in the workplace.

Always keep a human in the loop. Because people tend to assume AI is unemotional and rule-bound, organizations may underinvest in auditing once an AI system is launched. Although it is generally obvious that human decision-makers must be subject to audits and other checks and balances to ensure fairness, the need for oversight is often less intuitive when the decision-makers are AI systems that “feel” fairer. To bridge the gap between “it feels fair” and actual fairness, leaders should accompany any AI implementation with rigorous pre-deployment testing and ongoing monitoring. This means creating clear governance systems that ensure accountability for AI-driven decisions and formalize the role of human managers. Leadership teams should define which decisions (or aspects of decisions) will be automated, and specify when humans must review, intervene, or override algorithmic decisions. Most importantly, leaders shouldn’t wait until problems arise. Instead, they should proactively and regularly audit AI systems for potential bias, be transparent about the results, and communicate how they will mitigate any issues they discover.

Leverage the advantages of AI ethically. The fact that people may be more willing to accept bad news when the decision was made by AI suggests that, in some cases, AI can reduce backlash against unpopular decisions. However, this advantage must be used ethically. As much as possible, automated decisions should be coupled with clear explanations, formal appeal channels, and human review. After all, AI is not immune to bias, but our research suggests that people may be less aware of AI bias than of human bias. That makes it especially important not to use this effect to “nudge” people into accepting unfair outcomes.

Don’t let your own AI bias obscure problems. Bias doesn’t discriminate: senior leaders are just as vulnerable as their junior staff to the assumption that AI is fairer than it is. As a leader, you need to recognize your own tendency to assume AI objectivity, and take steps to ensure that this assumption doesn’t obscure organizational problems.
