New research shows that people are more likely to accept bad news when the decision is made by AI than when it is made by a human.

By Melody M. Chao and Jungmin Choi

Picture this: After months of working toward a big promotion, your manager puts a meeting on the calendar. This is it — you’re sure you’re getting the new role. You step into her office, close the door, and then you find out: You didn’t get it.

You know the decision was based on factors like performance metrics, skill alignment, and peer feedback. Still, it feels unfair. You might start questioning your manager’s judgment, or wondering whether office politics played a role. You dwell on the decision, maybe even challenge it, and ultimately feel angry and demotivated.

Now, picture the same scenario, but this time, you learn that an AI agent made the decision using those same performance metrics, skill indicators, and peer feedback. Would you feel differently?

You would likely feel just as disappointed, but our recent research reveals a surprising reality: When a negative decision is made by AI, people are significantly more likely to accept it as fair, and far less inclined to question or appeal it.

This is especially surprising given how much media attention has focused on AI bias, from widely publicized cases of AI hiring tools discriminating based on race to accusations of unfair terminations or unreasonable labor practices by algorithmic managers. Nevertheless, across a series of studies, we found that people perceive AI as less emotional and therefore more impartial than humans, leading them to accept negative judgments from AI agents much more readily than from human decision-makers.

Bad News Seems Fairer When It Comes From AI

To test this, we conducted six experiments with more than 2,500 participants, including business school students in Asia as well as working adults from the U.S. and around the world. Participants imagined themselves working as an intern or employee involved in a workplace dispute: A colleague took credit for work they had done, or they had unintentionally caused a workplace accident, and a bonus or promotion was at stake. They were then told that either a human manager or an AI system had made a decision either in their favor or against them, and were asked how they would react.

What did we find? Across scenarios, when the outcome was favorable, participants felt that the decision was fair, regardless of whether it came from a human or AI. But when the outcome was unfavorable, participants consistently believed that the decision was fairer if it came from AI, which in turn made them more willing to accept the unfavorable decision and continue to engage productively in future tasks.

Interestingly, our research also revealed a way to short-circuit this effect. When we reminded participants that AI algorithms often replicate human biases, they no longer believed that the AI’s judgment was impartial. As a result, they reacted to negative outcomes in the same way regardless of whether the decision came from a human or an AI system.

Making the Most of AI at Work

AI systems can add real value across many professional contexts. But they are still far from perfect, and the assumption that they are inherently fairer than human decision-makers can carry substantial risks. As such, our findings suggest several practical takeaways for leaders aiming to leverage AI safely and effectively in the workplace:

1. Always keep a human in the loop.

As people tend to assume AI is unemotional and rule-bound, organizations may underinvest in auditing once an AI system is launched. In other words, although it is generally obvious that human decision-makers must be subject to audits and other forms of checks and balances to ensure fairness, the need for oversight is often less intuitive when the decision-makers are AI systems that “feel” fairer. To bridge the gap between “it feels fair” and actual fairness, leaders should accompany any AI implementation with rigorous pre-deployment testing and ongoing monitoring.

This means creating clear governance systems that ensure accountability for AI-driven decisions and formalize the role of human managers. Leadership teams should define which decisions (or aspects of decisions) will be automated, and specify when humans must review, intervene, or override algorithmic decisions. Most importantly, leaders shouldn’t wait until problems arise. Instead, they should proactively and regularly audit AI systems for potential bias, be transparent about the results, and communicate how they will mitigate any issues they discover.

2. Leverage the advantages of AI ethically.

The fact that people may be more willing to accept bad news if the decision was made by AI suggests that in some cases, AI can reduce backlash against unpopular decisions. However, this AI advantage must be used ethically.

As much as possible, automated decisions should be coupled with clear explanations, formal appeal channels, and human review. After all, AI is not immune to bias, but our research suggests that people may be less aware of AI bias than human bias. That makes it especially important not to use this effect to “nudge” people into accepting unfair outcomes.

3. Don’t let your own AI bias obscure problems.

Bias doesn’t discriminate. Senior leaders are just as vulnerable as their junior staff to the assumption that AI is fairer than it is. As a leader, you need to recognize your own tendency to assume AI objectivity, and take steps to ensure that this assumption doesn’t obscure organizational problems.

Even if basic metrics suggest an AI system is performing fairly, deeper analyses may reveal hidden inequities or unintended consequences. By tracking not only the predictive performance or accuracy of an AI tool but also downstream effects, such as employee turnover, applicant withdrawal, complaint rates, appeals, and other behavioral metrics, leaders can detect and address subtle issues before they escalate.
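To make this concrete, here is a minimal sketch (in Python, using only the standard library) of what one such deeper analysis might look like: it compares favorable-outcome rates across employee groups and applies the common “four-fifths” screening heuristic. The group labels, data, and threshold are illustrative assumptions, not part of our study.

```python
from collections import defaultdict

# Hypothetical decision log: (employee group, outcome) pairs.
# The groups, outcomes, and counts here are illustrative only.
decisions = [
    ("group_a", "favorable"), ("group_a", "favorable"),
    ("group_a", "unfavorable"), ("group_a", "favorable"),
    ("group_b", "favorable"), ("group_b", "unfavorable"),
    ("group_b", "unfavorable"), ("group_b", "unfavorable"),
]

# Tally total decisions and favorable outcomes per group.
totals = defaultdict(int)
favorable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    if outcome == "favorable":
        favorable[group] += 1

# Favorable-outcome rate per group.
rates = {g: favorable[g] / totals[g] for g in totals}
for g in sorted(rates):
    print(f"{g}: favorable rate = {rates[g]:.0%}")

# Disparate-impact ratio: lowest group rate divided by highest.
# The "four-fifths rule" (ratio < 0.8) is a rough screen that flags
# a gap for closer human review; it is a screen, not proof of bias.
ratio = min(rates.values()) / max(rates.values())
flag = "  <- flag for human review" if ratio < 0.8 else ""
print(f"disparate-impact ratio = {ratio:.2f}{flag}")
```

The same kind of segmented comparison can be applied to any downstream metric, such as appeal rates, complaint rates, or turnover, broken out by group.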

4. Demystify AI.

Finally, our findings show that simply informing people about potential AI bias reduces blind trust in AI fairness. This highlights how powerful education can be. Employees who are unaware of AI’s limitations may be more likely to accept unfavorable outcomes, which may make them easier to manage in the short term. But ignorance is not bliss. It is the responsibility of managers and leaders to empower their teams by helping employees understand how AI works, why it can be biased, and what safeguards are needed.

To that end, organizations can offer “algorithmic literacy” training to help employees and executives alike understand that AI is not magic. These training programs should explain the core principle of garbage in, garbage out: Biased input leads to biased output. In the context of performance appraisals, leaders can acknowledge that while AI is a powerful tool to help managers analyze performance data and peer feedback, it can only process what it has been trained to see. When people understand that an AI system can inherit human fallibility, they become more discerning users of these tools.

At the same time, communication must be handled carefully. Transparency builds trust, but excessive technical detail about AI can confuse, while shallow platitudes create false confidence. Effective communicators strike a balance, explaining key information in plain language and providing relevant specifics, such as human oversight mechanisms and appeal channels, without overwhelming their audience.

5. Use AI for good by shifting from blind acceptance to informed partnership.

AI tools have the potential to add tremendous value to today’s workplaces. They can boost efficiency, facilitate decision-making, and in many cases increase fairness. But the belief that AI is an unimpeachable, objective judge is a dangerous fantasy.

Our research shows that when people don’t know any better, they tend to assume AI makes fairer decisions than humans. Despite widespread narratives in the media suggesting that disgruntled employees may rebel against AI systems, our data suggests the opposite: the primary risk is not rebellion, but resignation. Leaders must therefore take proactive steps to break down the halo of perceived fairness that blinds so many of us to the imperfections of AI, designing systems that leverage AI’s analytical power while ensuring human accountability. Only then can we move from blind acceptance of AI decisions to informed, responsible partnerships between humans and algorithms.

Melody M. Chao is an associate professor of management at HKUST. Her research focuses on organizational behavior, mindsets, normative judgments, group processes, and intercultural relations. Jungmin Choi received her Ph.D. at HKUST and is now an assistant research professor at the University of Cambridge. Her research focuses on understanding organizational behavior in the context of technology integration at work.

This article draws on the research paper “For Me or Against Me? Reactions to AI (vs. Human) Decisions That Are Favorable or Unfavorable to the Self and the Role of Fairness Perception,” published in Personality and Social Psychology Bulletin, authored by Melody M. Chao and Jungmin Choi.