New research shows that workers trust AI managers less than human ones. Here’s how organizations can start to bridge the gap.

Authored by: LI Mingyu, T. Bradford BITTERLY

Bad bosses: We’ve all had one. From hateful, vindictive managers to seemingly incompetent supervisors, most of us have been stuck working for someone whose human emotions and flaws interfered with their ability to manage effectively.

In other words, being human can make managing hard. Could AI do better?

It’s no longer a hypothetical question. Today, algorithms are already taking over countless management functions, and the International Data Corporation predicts that within the next few years, 80% of Forbes Global 2000 companies will use AI to hire, fire, or train employees. Proponents argue that without the burden of human emotions, AI can manage more efficiently and consistently, but our recent research suggests that it’s not so simple.

To explore the impact of being managed by AI, we conducted a series of studies with more than 800 workers across the U.S. and China. We started by interviewing 20 food delivery workers in Shenzhen, China, about their perceptions of AI management. Then, we surveyed 400 additional food delivery drivers in Shenzhen and Linyi: Half were managed solely by an AI algorithm that assigned orders, penalized delays (through fines or account suspensions), and made termination decisions without human involvement, while the other half used the same AI-powered app but also had human managers who held daily check-ins, adjusted unfair AI-generated assignments, and helped riders challenge automated penalties. Finally, we conducted a scenario-based study in which we asked 400 U.S.-based employees to imagine working as delivery drivers for a food delivery platform. All participants were asked to imagine working under the same compensation and performance-monitoring structure, but half were told they were working for an AI system, while the other half were told they were working for a human regional manager.

What did we find? Even when the AI and human managers in our studies made exactly the same decisions, workers trusted AI bosses much less than their human equivalents. And trust isn’t just a nice-to-have: Prior research has shown that employees who trust their managers report higher job satisfaction, experience greater well-being, take fewer days off, feel a greater sense of organizational commitment, and are more willing to go above and beyond for their organizations.

Trust Is Built on Ability, Integrity, and Benevolence

So, why do people trust AI bosses less than human ones? Research suggests that trust is driven by three key factors: ability, integrity, and benevolence. When it comes to ability and integrity, AI managers have an undeniable advantage: AI systems can coordinate pick-ups and drop-offs more efficiently than human dispatchers. And prior research indicates that employees are less concerned about algorithms being judgmental or using their data for unethical purposes, likely because they know these tools have been programmed to follow ethical guidelines (and won’t fall prey to human foibles such as greed, impatience, or schadenfreude).

But when it comes to the third piece of the trust puzzle—being perceived as benevolent—AI just doesn’t measure up. Benevolence refers to the quality of caring about others and having their best interests at heart. While AI systems can outperform humans in processing data at scale, enforcing rules consistently, and demonstrating unwavering integrity through algorithmic neutrality, they fundamentally lack the capacity for genuine emotions and empathy.

In other words, ability and integrity are about what you do, but benevolence is about how you seem to feel while you’re doing it. Research shows that employees feel more psychologically safe and more committed to their organizations when their managers demonstrate that they care by authentically expressing emotions such as satisfaction or pride. Similarly, studies have shown that if people appear happy when donating money, they are seen as more benevolent (even if the size of the donation is the same).

Ultimately, how kind someone seems to us is deeply tied to our perception of how authentic their emotional experience is. Though AI has become remarkably adept at recognizing and even mimicking human emotions, this capability is fundamentally different from genuine emotional experience. An AI boss may be able to recognize and replicate emotional cues, but as long as workers know it’s an AI, they will also know that it lacks the lived experience and subjective feeling of emotions necessary to come across as benevolent.

This distinction is crucial: Modern AI can convincingly simulate empathy, but it cannot genuinely feel it. As such, even if an algorithm perfectly mirrors human emotional expressions, workers are unlikely to perceive it as actually caring about them—and this lack of perceived benevolence creates a major hurdle to fostering trust.

Not Every Job Requires Benevolence

Of course, not every job requires a benevolent boss. Through a series of additional studies with over 1,500 U.S. employees, we found that how much workers trust AI management depends on how much empathy a situation demands.

In situations where employees seek emotional support, such as crises, personal distress, or moments in which their sense of agency feels threatened, they report a greater need for empathetic engagement. In these situations, workers really want their manager to be able to authentically share their emotional state. As a result, they strongly prefer a human manager to an algorithm—even if the actions that the manager takes are exactly the same.

Importantly, our findings demonstrate that people don’t mistrust AI because it actually acts in ways that are less benevolent than humans, but simply because it is perceived as less benevolent. In our studies, we held managers’ actions and communications constant, enabling us to measure the impact of simply knowing that the exact same email or policy decision was coming from an AI rather than a human.

Conversely, in contexts that require more practical problem-solving than empathy (e.g., coordinating routine delivery orders or processing typical time-off requests), an AI manager’s lack of perceived benevolence is less of an issue. In such environments, AI systems can reach trust levels comparable to those of their human counterparts simply by demonstrating reliability and competence.

That said, even if a specific situation demands less empathy, we found that when you ask employees whether they would generally prefer a manager who is empathetic or one who is not, they (unsurprisingly) say they would prefer the empathetic one. In other words, while there are some cases in which an AI boss can foster trust, people generally tend to prefer a human.

Bridging the AI Trust Gap

So, what does this mean for leaders? The answer isn’t to abandon AI entirely. After all, when leveraged effectively, these tools can provide incredible value to workers and employers alike. But to start to bridge the trust gap between AI and human management, there are several steps organizations should take:

1 Use AI for Execution, and Humans for Empathy

Our research demonstrates that human managers are best reserved for tasks that require empathy, while AI bosses can handle more execution-focused work. For example, AI can optimize schedules by analyzing calendars, meeting patterns, and resource availability, or use real-time data to identify workflow bottlenecks and improve productivity. In contrast, organizations should stick with human managers for tasks such as coaching employees through personalized career development plans, mediating conflicts by interpreting nonverbal cues and cultural nuances, and providing support during crises and other emotional, high-pressure situations.

2 Create Feedback Loops Between AI and Human Management

Another important way to foster trust is to build feedback loops in which AI and human bosses provide continuous input on each other’s work. In this way, organizations can combine AI’s strengths in processing large volumes of data and identifying patterns with human strengths in contextual understanding and emotional intelligence.

For example, IBM uses its Watson analytics platform to analyze employees’ project completion rates, collaboration patterns, goal achievement metrics, and even internal training data to generate detailed reports. Then, human managers review these reports, adding their own observations about each employee’s growth, challenges, and qualitative contributions. This collaborative approach to performance reviews has helped IBM increase employee engagement and satisfaction by offering clearer insight into growth opportunities. Moreover, creating feedback loops like these also helps refine AI systems over time, as human insights are fed back into the algorithm to improve its future analyses.
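To make the mechanics of such a loop concrete, here is a minimal sketch in Python of how AI-generated performance metrics might be paired with a human manager’s review, with the gap between the two ratings logged as a correction signal that can later be fed back to whoever maintains the AI system. The class names, fields, and weighting are purely illustrative assumptions, not IBM’s actual Watson tooling.

```python
# Illustrative sketch only: pairing an AI-generated performance report with a
# human manager's review, and recording the manager's correction so it can be
# fed back into the AI system. All names and weights here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AIPerformanceReport:
    employee_id: str
    completion_rate: float       # share of projects completed on time (0-1)
    collaboration_score: float   # AI-derived score from collaboration patterns (0-1)
    flagged_risks: list[str] = field(default_factory=list)


@dataclass
class ManagerReview:
    reviewer: str
    qualitative_notes: str       # context the AI cannot see (mentoring, morale, etc.)
    adjusted_rating: float       # the manager's final call, 0-1


def combined_review(report: AIPerformanceReport, review: ManagerReview) -> dict:
    """Merge the AI's metrics with the manager's judgment into one record."""
    ai_rating = 0.5 * report.completion_rate + 0.5 * report.collaboration_score
    return {
        "employee_id": report.employee_id,
        "ai_rating": round(ai_rating, 2),
        "final_rating": review.adjusted_rating,
        "manager_notes": review.qualitative_notes,
        # The gap between the two ratings is the feedback signal sent back
        # to the team maintaining the AI model.
        "correction": round(review.adjusted_rating - ai_rating, 2),
    }


if __name__ == "__main__":
    report = AIPerformanceReport("emp-042", completion_rate=0.92, collaboration_score=0.70)
    review = ManagerReview("J. Rivera", "Mentored two new hires this quarter.", adjusted_rating=0.88)
    print(combined_review(report, review))
```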

3 Build Transparency into AI Systems

While perceptions of benevolence contribute substantially to fostering trust, employees are also more likely to trust a decision if they understand how and why it was made. As such, organizations can provide greater transparency by communicating what the AI is being used for and, when feasible, having the AI explain its reasoning to employees. This builds trust by both giving users visibility into the logic underlying the AI’s recommendations and helping them understand its limitations (and thus set more realistic expectations).

In addition, if AI management is complemented by human support, organizations should be transparent about the human involvement, too. For example, in our field study, we found that delivery drivers were more likely to trust their joint AI-human management system when the human input was very visible (e.g., when human managers held morning meetings with employees, or publicly intervened when the AI incorrectly penalized employees). In other words, just having a human in the loop isn’t enough—it’s also important that employees see their human managers provide insights and challenge AI recommendations.

4 Empower Employees to Influence AI Management

Incorporating feedback from human managers is step one. But of course, managers aren’t the only people with valuable insights and experiences that can contribute to improving AI systems.

By enabling employees (not just managers) to provide feedback, ask questions, and challenge decisions, organizations can help AI systems learn to meet employees’ needs and preferences more effectively. For example, asking employees to rate AI suggestions or letting them customize certain aspects of the AI’s behavior to better suit their individual needs can give workers a greater sense of control, boosting their trust in these systems.
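As a rough illustration of what this could look like in practice, the Python sketch below (hypothetical names throughout, not tied to any specific platform) records employee ratings of AI suggestions, stores simple preference settings, and flags suggestions that employees consistently rate poorly so they can be reviewed or retrained.

```python
# Illustrative sketch only: a minimal store for employee feedback on an AI
# manager. Names are hypothetical; a real system would persist this data and
# route flagged items into the model's review or retraining pipeline.
from collections import defaultdict
from statistics import mean


class AIFeedbackStore:
    def __init__(self) -> None:
        self.ratings: dict[str, list[int]] = defaultdict(list)           # suggestion_id -> 1-5 ratings
        self.preferences: dict[str, dict[str, str]] = defaultdict(dict)  # employee_id -> settings

    def rate_suggestion(self, suggestion_id: str, rating: int) -> None:
        """Record an employee's 1-5 rating of an AI-generated suggestion."""
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings[suggestion_id].append(rating)

    def set_preference(self, employee_id: str, key: str, value: str) -> None:
        """Let an employee customize an aspect of the AI's behavior."""
        self.preferences[employee_id][key] = value

    def low_rated_suggestions(self, threshold: float = 2.5) -> list[str]:
        """Flag suggestion types that employees consistently rate poorly."""
        return [sid for sid, scores in self.ratings.items() if mean(scores) < threshold]


if __name__ == "__main__":
    store = AIFeedbackStore()
    store.rate_suggestion("route-plan-118", 2)
    store.rate_suggestion("route-plan-118", 1)
    store.set_preference("emp-042", "max_evening_shifts_per_week", "2")
    print(store.low_rated_suggestions())  # ['route-plan-118']
```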

5 Pilot Test for Trust

When implementing any new technology or process, it’s important to start with small-scale pilot tests before moving forward with a larger rollout. When it comes to AI management tools, this is doubly true: Organizations should of course test the technical effectiveness and efficiency of an AI system before launching it at scale, but beyond these technical metrics, leaders should also assess employees’ trust in the system and adapt accordingly if initial tests reveal a major trust gap.

For example, as professors, we were initially excited about using AI to grade student essays. However, initial pilot tests indicated that even if AI could help us grade more accurately and efficiently, our students (understandably) wanted to feel like the person evaluating the essays they had put so much thought and effort into truly cared about them. No matter how well the AI performed, our students simply didn’t trust it as much as they trusted a real, human reader. Piloting an AI tool with just a few students helped us learn this important lesson and adapt our plans, rather than discovering the negative impact on students after rolling out an AI grading system for an entire class.

As AI continues to advance—and as we all grow more accustomed to interacting with human-like AI tools—it is possible that one day, we will view these systems as having emotional capabilities on par with human managers. But until that day, our research demonstrates that even if AI acts and speaks exactly the same as a human, it is perceived as less benevolent and thus less trustworthy. In this reality, the organizations that thrive will be those that recognize a fundamental truth: People follow leaders, not machines. As such, AI should be designed to enable humanity, not to erase it.

Li Mingyu is a PhD student in the Department of Management at HKUST, specializing in organizational behavior. T. Bradford Bitterly is an assistant professor of management at HKUST, focusing on topics such as negotiation, power and status, trust, and communication.