HKUST Business Review

Build Transparency into AI Systems

While perceptions of benevolence contribute substantially to fostering trust, employees are also more likely to trust a decision if they understand how and why it was made. As such, organizations can provide greater transparency by communicating what the AI is being used for and, when feasible, having the AI explain its reasoning to employees. This builds trust both by giving users visibility into the logic underlying the AI's recommendations and by helping them understand its limitations (and thus set more realistic expectations). In addition, if AI management is complemented by human support, organizations should be transparent about the human involvement, too. For example, in our field study, we found that delivery drivers were more likely to trust their joint AI-human management system when the human input was highly visible (e.g., when human managers held morning meetings with employees, or publicly intervened when the AI incorrectly penalized employees). In other words, just having a human in the loop isn't enough; it's also important that employees see their human managers provide insights and challenge AI recommendations.

Empower Employees to Influence AI Management

Incorporating feedback from human managers is step one. But of course, managers aren't the only people with valuable insights and experiences that can contribute to improving AI systems. By enabling employees (not just managers) to provide feedback, ask questions, and challenge decisions, organizations can help AI systems learn to meet their employees' needs and preferences more effectively. For example, features such as asking employees to rate AI suggestions or allowing employees to customize certain aspects of the AI's behavior to better suit their individual needs can give workers a greater sense of control, boosting their trust in these systems.
Pilot Test for Trust

Whenever implementing any new technology or process, it's always important to start with small-scale pilot tests before moving forward with a larger rollout. When it comes to AI management tools, this is doubly true: organizations should of course test the technical effectiveness and efficiency of the AI system before launching it at scale, but beyond these technical metrics, leaders should also assess employees' trust in the system, and adapt accordingly if initial tests reveal a major trust gap. For example, as professors, we were initially excited about using AI to grade student essays. However, initial pilot tests indicated that even if AI could help us grade more accurately and efficiently, our students (understandably) wanted to feel that the person evaluating the essays they had put so much thought and effort into truly cared about them. No matter how well the AI performed, our students simply didn't trust it as much as they trusted a real, human reader. Piloting an AI tool with just a few students helped us learn this important lesson and adapt our plans, rather than discovering the negative impact on students after rolling out an AI grading system for an entire class.

As AI continues to advance, and as we all grow more accustomed to interacting with human-like AI tools, it is possible that one day we will view these systems as having emotional capabilities on par with human managers. But until that day, our research demonstrates that even if AI acts and speaks exactly the same as a human, it is perceived as less benevolent and thus less trustworthy. In this reality, the organizations that thrive will be those that recognize a fundamental truth: people follow leaders, not machines. As such, AI should be designed to enable humanity, not to erase it.

Li Mingyu is a PhD student in the HKUST Department of Management, specializing in organizational behavior. T. Bradford Bitterly is an assistant professor of management at HKUST, focusing on topics such as negotiation, power and status, trust, and communication.

[Figure: Theoretical model linking AI (vs. human) management, empathy demand, benevolence, and trust]
