Why Middle Managers Are the Real AI Adoption Channel (And What to Do About It)

Maverick Foo
Tuesday, 5th May 2026

Most organizations think about AI adoption in terms of tools, policies, and leadership announcements. A new platform gets rolled out. A town hall explains the vision. An email lands with usage guidelines.

But when uncertainty rises, employees rarely turn to any of those things. They turn to the person closest to them in the org chart: their manager.

This is the insight that caught my attention in the Randstad Workmonitor 2026 report, which dedicates an entire section to the role of managers during periods of change. The data is striking, and it reframes where AI adoption actually happens.


The Data: Managers as the Trust Conduit

Randstad’s global research paints a clear picture of how much weight employees place on their direct manager.

72% of talent say they have a strong relationship with their manager, up 8 percentage points from last year. 63% feel more connected to their manager than to the company as a whole. And when volatility rises, 60% seek more reassurance from their manager.

For leaders and L&D professionals working on AI enablement, this has a direct implication: middle management is the real adoption channel. Not the CEO town hall, the policy memo, or the new tool rollout.

When people feel uncertainty about AI, they do not ask, “What did leadership say?” They ask, quietly, “What does my manager think?”


The Tension: The Conduit Is Getting Bypassed

The same Randstad research surfaces a tension that most organizations have not yet confronted.

Half of talent now use AI for work advice instead of consulting their manager. And 55% avoid raising issues with their manager due to job insecurity. So while managers are the trust conduit, that conduit is also being bypassed.

This is not a manager problem. It is an organizational design problem. When managers are not equipped to have meaningful conversations about AI, employees find their own answers elsewhere. The result is uncoordinated adoption, inconsistent standards, and a growing gap between what leadership envisions and what actually happens on the ground.

A 2025 BearingPoint study reinforces this. It found that 43% of standard managerial tasks are already impacted by generative AI, with 19% augmented and 24% automated. Yet only 35% of companies have structured change-management programs for AI adoption. Most managers are, quite literally, figuring it out alone.

McKinsey’s research on middle managers and generative AI adds another layer. Less than 30% of managers’ time is currently spent on people leadership. The majority goes to individual execution and admin, much of which AI can automate. The opportunity is clear: free managers from low-value coordination work so they can focus on the trust-building and coaching that actually drives adoption. But that shift does not happen on its own.


Why This Matters More Than Most Organizations Realize

There is a growing body of evidence that middle managers are decisive in digital and AI transformations. Academic reviews consistently describe them as change agents, digital facilitators, and innovation promoters who bridge top-management strategy and frontline practice.

At the same time, manager engagement is declining. Gallup data show global manager engagement dropped from 30% to 27% in a single year, with steeper declines among younger managers. This is happening at exactly the layer where AI transformation pressure is highest.

The contradiction is real: organizations need more from their middle managers precisely when those managers are receiving less support and experiencing more strain.


A Practical Starting Point: The CALM Framework

If you want AI adoption that scales, treat middle managers as adoption infrastructure, not as an afterthought.

A practical starting point is four conversations every manager should have with their team. I call it the CALM framework:

Clarity

What is changing, and what is not. People fear the unknown more than the change itself. Managers who are upfront about what AI means for the team’s work reduce anxiety and build credibility.

Aspiration

What good looks like on this team. This is about setting a visible standard for effective AI use, in the team’s specific context. Not abstract AI literacy, but concrete examples that people can point to.

Limits

What is allowed, what is not, and where to ask. Guardrails make experimentation feel safe. When people know the boundaries, they are more willing to try new things.

Mistakes

How we handle them, and how we learn fast. Experimentation requires psychological safety. Managers who normalize AI mistakes as inevitable, and frame them as learning, create teams that actually adopt.

Start with one. Clarity is usually the easiest entry point. One honest conversation about what is changing can unlock the rest.

Implications for Leaders and L&D

  • Treat middle manager enablement as adoption infrastructure, not an optional add-on. If your managers cannot lead AI conversations, your rollout will stall at the team level regardless of the tools you deploy.
  • Move beyond generic AI literacy programs. Managers need role-specific guidance on leading AI-augmented teams: workflow redesign, coaching through uncertainty, and practical governance.
  • Monitor manager engagement as a leading indicator of AI adoption health. Declining engagement at the manager layer is an early warning sign that transformation pressure is outpacing support.

Try This This Week

  • Pick one conversation from the CALM framework and have it with your team before Friday. Clarity is the easiest place to start: simply tell your team what is changing with AI in your function, and what is staying the same.
  • Ask your managers, directly, whether they have received structured guidance on leading their teams through AI adoption. The answer will tell you how much of your adoption strategy is actually reaching the ground.
  • Use the Radiant Institute Team AI Effectiveness Scorecard to get a quick snapshot of how your team is using AI across 7 Drivers. For this topic, pay close attention to the Scalability driver, which measures how well AI ways of working spread beyond individuals into the wider team. If Scalability scores low, it often means managers have not yet built the conversations and habits that turn individual experiments into team-wide practice.

A Closing Thought

AI adoption does not scale through announcements. It scales through conversations. And the people best positioned to have those conversations are the ones sitting between strategy and execution: your middle managers.

The question is whether those managers are equipped to be the conduit, or whether they are being asked to “make it happen” with no playbook.

If managers are improvising, your culture fragments. If managers are equipped, adoption compounds.

At Radiant Institute, we work with organizations across Malaysia and APAC to build structured AI enablement programs that reach the manager layer, where adoption actually lives. If this resonates, we would welcome a conversation about how to equip your managers for what comes next.

Maverick Foo
Lead Consultant, AI-Enabler, Sales & Marketing Strategist

Partnering with L&D & Training Professionals to Infuse AI into their People Development Initiatives 🏅Award-Winning Marketing Strategy Consultant & Trainer 🎙️2X TEDx Keynote Speaker ☕️ Cafe Hopper 🐕 Stray Lover 🐈
