HKUST Business Review
Picture this: After months of working toward a big promotion, your manager puts a meeting on the calendar. This is it; you’re sure you’re getting the new role. You step into her office, close the door, and then you find out: You didn’t get it. You know the decision was based on factors like performance metrics, skill alignment, and peer feedback. Still, it feels unfair. You might start questioning your manager’s judgment, or wondering whether office politics played a role. You dwell on the decision, maybe even challenge it, and ultimately feel angry and demotivated.

Now, picture the same scenario, but this time you learn that an AI agent made the decision using those same performance metrics, skill indicators, and peer feedback. Would you feel differently?

You would likely feel just as disappointed, but our recent research reveals a surprising reality: When a negative decision is made by AI, people are significantly more likely to accept it as fair, and far less inclined to question or appeal it. This is especially surprising given how much media attention has focused on AI bias, from widely publicized cases of AI hiring tools discriminating based on race to accusations of unfair terminations or unreasonable labor practices by algorithmic managers. Nevertheless, across a series of studies, we found that people perceive AI as less emotional and therefore more impartial than humans, leading them to accept negative judgments from AI agents much more readily than from human decision-makers.

Bad News Seems Fairer When It Comes From AI

Specifically, we conducted six experiments with more than 2,500 participants, including business school students in Asia as well as working adults from the U.S. and around the world. Participants imagined themselves working as an intern or employee involved in a workplace dispute: A colleague took credit for work that they did, or they had unintentionally caused a workplace accident, and a bonus or promotion was at stake. They were then told that either a human manager or an AI system had made a decision either in their favor or against them, and were asked how they would react.

What did we find? Across scenarios, when the outcome was favorable, participants felt that the decision was fair, regardless of whether it came from a human or AI. But when the outcome was unfavorable, participants consistently believed that the decision was fairer if it came from AI, which in turn made them more willing to accept the unfavorable decision and continue to engage productively on future tasks.