AI in HR: Why the Gap Between Capability and Reality Should Change Your Strategy

Maverick Foo
Sunday, 15th March 2026

Every week, a new AI tool promises to change everything. And every week, someone in a meeting says: “This will replace half your team.”

The narrative around AI replacing jobs is loud. But what if the data tells a different story?


The Hammer Problem

When a tool becomes powerful and exciting, every problem starts to look like a nail. That is the classic hammer problem, and it is playing out across AI in HR and workforce strategy right now.

The biggest mistake is confusing an impressive demo with an imminent labour market shift. Demos are designed to impress. Strategy needs to be designed around evidence.

Anthropic’s new report, Labor Market Impacts of AI, offers exactly that kind of evidence. And it challenges some widely held assumptions about AI replacing jobs.


What Anthropic’s Data Actually Shows

Anthropic’s researchers compared what AI could do at work with what it is actually doing. The gap is massive.

Take Computer and Math roles. Theoretical AI capability sits at 94%. Actual observed coverage? Just 33%.

AI is not yet replacing jobs at the pace most people assume. And this pattern repeats across occupations.

The report introduces a concept called Observed Exposure, a measure that goes beyond most AI assessments. Instead of asking “Can AI do this task?”, it asks: “Is AI actually doing this task, in work settings, in automated ways?”

That distinction matters enormously for anyone making workforce decisions.
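The gap between the two questions can be made concrete with a few lines of code. This is a minimal sketch: the Computer and Math figures (94% theoretical capability, 33% observed coverage) come from the report as cited above; the function name and the simple subtraction are my own illustrative framing, not Anthropic’s methodology.

```python
def capability_gap(theoretical: float, observed: float) -> float:
    """Share of theoretically automatable work not yet observed in practice."""
    return theoretical - observed

# Figures for Computer and Math occupations, as quoted in the article.
gap = capability_gap(theoretical=0.94, observed=0.33)
print(f"Capability gap: {gap:.0%}")  # prints "Capability gap: 61%"
```

That 61-point spread is the space where strategy built on demos, rather than evidence, goes wrong.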


Why This Matters for HR and Workforce Strategy

Most AI strategies are built on potential, not reality. The cost of getting this wrong is not just a failed rollout. It is a misaligned workforce strategy.

Anthropic’s data shows no clear rise in unemployment in the most exposed occupations since late 2022. But something else is showing up: a subtle slowdown in hiring for younger workers. A narrowing entry gate rather than mass displacement.

That is a succession planning story. If entry-level hiring slows while senior roles remain stable, organisations risk hollowing out their talent pipeline without realising it.

For HR and L&D leaders, this means your workforce planning assumptions may be built on the wrong data. If your strategy is anchored to theoretical capability rather than observed adoption, you could be solving for a future that has not arrived, while missing the shifts already underway.


The Better Question: Observed Exposure

Most AI capability measures ask: “Can AI do this task?” Anthropic’s Observed Exposure metric asks something far more useful: “Is AI actually doing this task, in work settings, in automated ways?”

The difference between these two questions is where strategy lives. Theoretical capability tells you what is possible. Observed exposure tells you what is happening. Leaders who build plans on the first without checking the second end up over-investing in automation that their teams and workflows are not ready to absorb.


Three Filters to Overcome AI Adoption Challenges

Before making AI-driven workforce decisions, run every use case through three filters:

  1. Fit: Does the task match what AI handles well today? Not what it might handle next year, but what it reliably does now.
  2. Friction: Are the workflow, legal, and trust conditions in place? A task might be technically suitable for AI, but if compliance, data privacy, or team trust are not addressed, adoption will stall. This is where most AI adoption challenges live.
  3. Follow-through: Are your people being developed to verify, challenge, and improve on AI outputs? AI assistance without human verification introduces quality risk that can offset any productivity gains.

In practice, most organisations have a decent handle on Fit. It is Friction and Follow-through where the real work lives.
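The three filters above can be run as a simple checklist. The sketch below is a hypothetical illustration: the field names, the example use case, and the rule that all three filters must pass are my own assumptions layered on the article’s framework, not a prescribed scoring method.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    fit: bool             # Does AI reliably handle this task today?
    friction: bool        # Are workflow, legal, and trust conditions in place?
    follow_through: bool  # Are people developed to verify and improve outputs?

    def ready(self) -> bool:
        # Illustrative rule: a use case proceeds only when every filter passes.
        return self.fit and self.friction and self.follow_through

    def blockers(self) -> list[str]:
        checks = {"Fit": self.fit, "Friction": self.friction,
                  "Follow-through": self.follow_through}
        return [name for name, passed in checks.items() if not passed]

# Hypothetical example: technically feasible, but compliance unresolved.
case = UseCase("Draft first-pass job descriptions",
               fit=True, friction=False, follow_through=True)
print(case.ready(), case.blockers())  # prints "False ['Friction']"
```

Even as a whiteboard exercise, forcing each use case through all three checks surfaces the Friction and Follow-through gaps that a capability-only assessment hides.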

Implications for Leaders and L&D

  • Audit your AI strategy assumptions: are they built on observed adoption data, or on theoretical capability scores?
  • Watch for the “narrowing gate” effect in your hiring pipeline, especially for early-career roles
  • Shift workforce planning conversations from “What can AI do?” to “What is AI actually doing in our context?”

Try This This Week

  • Pick one AI use case your team is planning and score it against all three filters: Fit, Friction, and Follow-through
  • Review your most recent AI training programme and ask whether it addressed workflow integration or only awareness
  • Read Anthropic’s Labor Market Impacts of AI report and compare their observed exposure data against your own assumptions

Ending thought:

The future of work with AI will not be shaped by what the technology can do in theory. It will be shaped by how thoughtfully organisations close the gap between capability and reality. Knowing the answer starts with knowing where your team actually stands.

If you are not sure where to begin, the Team AI Readiness Scorecard can help you benchmark your team’s current position across the drivers that matter most.

Maverick Foo

AI Enablement Strategist for L&D
