AI Productivity Without Burnout: Why AI Can Intensify Work and What to Do About It

Maverick Foo
Sunday, 22nd February 2026

🚨 Bad news.

AI doesn’t reduce work; it intensifies it.

That’s the argument from a Harvard Business Review piece published in February 2026, based on an eight-month study inside a ~200-person U.S. tech company. The researchers observed work habits, tracked internal communications, and ran 40+ interviews across functions.

If you are leading a team trying to improve AI productivity, this finding matters. Not because AI fails to save time; it often does. The issue is what happens next.

Why “AI saves time” and “AI makes people busier” can both be true

The disagreement often comes from mixing different definitions of work:

  • Task-level: time-on-task, output per hour, quality per unit time

  • Day-level: total hours, after-hours activity, recovery breaks, pace, fragmentation

  • Role-level: scope creep and task expansion, “shadow headcount” absorbed through self-service

  • System-level: coordination, review, compliance, rework burdens, rising norms for responsiveness

A simple way to reconcile it is to treat generative AI as a capacity shock. It reduces the cost of producing drafts, code, analysis, and options. What happens next depends on whether leaders convert that capacity into slack (less work, more recovery, better decisions), or convert it into throughput (more tasks, more channels, faster cycles, higher targets). Many organizations default to throughput unless norms and incentives are redesigned.

AI productivity is real, but it is not the same as workload relief

Multiple studies show meaningful task-level gains.

In mid-level writing tasks, access to generative AI reduced time and increased quality. In customer support, large deployments show average productivity gains, with much larger benefits for novices and smaller, sometimes negative effects for top performers. In consulting tasks, research suggests a “jagged frontier”: inside the frontier, faster and better work; outside it, confident errors can rise.

So yes, AI can make teams faster. The problem is that speed often turns into more attempted work, more parallel threads, and more pressure, unless you design for sustainability.

3 ways AI intensifies work (and why it feels invisible at first)

1) Task expansion (scope drift)

AI fills knowledge gaps and lowers the barrier to “just trying,” so people take on work that used to sit with other roles. Over time, that expands job scope, and can even absorb work that would have justified extra headcount.

This shows up as a manager polishing deck copy “because AI makes it quick,” a marketer doing first-pass analytics “because AI can interpret tables,” or a project lead drafting product comms “because the model can mimic the brand voice.” Each move feels small. Over a quarter, the role quietly grows.

2) Blurred boundaries (time spillover)

Because prompting feels lightweight, people slip “quick prompts” into lunch, meetings, and those tiny gaps in the day. It rarely feels like more work, but it reduces recovery.

This is happening inside a broader trend of boundary erosion in knowledge work. Early starts, late meetings, and constant pings already exist in many organizations. AI adoption lands inside this environment, and it can accelerate the habit of turning micro-gaps into micro-work.

3) More multitasking (attention fragmentation)

AI creates a new rhythm: multiple drafts, parallel threads, checking outputs, switching contexts. It feels productive, but it increases cognitive load and the sense of always juggling.

You get more iterations. You get more options. You also get more decisions. Over time, the hidden tax is attention.

A fourth intensifier leaders often miss: the rework tax

Even when AI saves time on drafting, a meaningful portion of that time can be lost to verification and fixing.

This is the hidden labor that makes AI feel “always on,” even when output is higher.

We can’t realistically stop AI use, so we need an AI practice

Once we’re “addicted” to the calculator, it’s hard to go back to mental arithmetic.

So the real response is to shape how AI is used. HBR’s recommended direction is to build an AI practice: simple norms that stop workload expansion from becoming the default.

Here are three practical starters.

1) Intentional pauses

Before major decisions are made, require one counterargument and one explicit link to the goal.

This does two things.

First, it slows “AI momentum,” where the best-written option wins. Second, it protects against jagged-frontier failures, where AI is strong in some zones and confidently wrong outside them, which creates downstream correction cost.

Example: If an AI-generated plan looks great, the team still needs to answer, “What is the strongest reason this could fail in our context?” and “Which goal does this directly support?”

2) Sequencing

Batch non-urgent AI outputs into set review windows to reduce always-on responsiveness.

If AI makes drafts cheap, review can become the bottleneck. Without sequencing, teams live in perpetual “almost done,” constantly revising and rechecking.

Example: “AI-generated drafts go into a shared folder by 3pm, we review 4–4:30pm, and we decide what ships.”

3) Human grounding

Protect time for real dialogue so teams do not drift into solo AI tunnel vision.

AI is excellent at helping individuals move fast. Teams still need shared understanding. A simple norm is a weekly 20–30 minute “assumptions check” where people compare what they asked AI to do, what they accepted, and what they rejected.

The leadership trap: when productivity gains trigger target ratcheting

Even when AI produces genuine efficiency, organizations can reclaim the productivity dividend by raising targets.

This converts efficiency into higher expected throughput rather than workload relief. It is one of the main reasons “AI productivity” needs work norms, not just better prompts.

Where this fits in Radiant Institute’s 7 Drivers of AI Effectiveness

In Radiant Institute’s 7 Drivers of AI Effectiveness, this lands most strongly under Driver #6: Mentality, which covers when and how naturally people bring AI into their work.

When AI becomes an always-on collaborator, work expands unless teams reset habits and norms. In practice, this also touches quality assurance and continuity, because rework and boundary erosion are sustainability issues, not just workflow issues.

Implications for Leaders and L&D

  • If you measure productivity only by output, you may accidentally reward scope drift, fragmentation, and rework-heavy patterns.

  • “AI training” that teaches prompting but ignores work design (role clarity, sequencing, verification) can accelerate overload.

  • The fastest practical win is an AI practice: simple team agreements that protect decision quality, boundaries, and attention.

Try This This Week

  • Pick one workflow (reports, proposals, decks) and define a sequencing rule: when drafts are created, reviewed, and closed.

  • Add one intentional pause to a recurring decision: one counterargument, one explicit link to the goal.

  • Track one intensity signal for two weeks: after-hours messages, meeting timing drift, or a simple “ping rate” proxy.
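If you want to make the after-hours signal concrete, a small script over exported message timestamps is enough to get started. This is a minimal sketch, not a tool recommendation: the function name, the sample data, and the 9:00–18:00 working window are assumptions to adjust to your own team’s norms.

```python
from datetime import datetime

WORK_START, WORK_END = 9, 18  # assumed working hours: inclusive start, exclusive end

def after_hours_share(timestamps: list[datetime]) -> float:
    """Fraction of messages sent before WORK_START, after WORK_END, or on weekends."""
    if not timestamps:
        return 0.0
    after_hours = [
        t for t in timestamps
        if t.weekday() >= 5  # Saturday (5) or Sunday (6)
        or not (WORK_START <= t.hour < WORK_END)
    ]
    return len(after_hours) / len(timestamps)

# Illustrative sample data (hypothetical export from a chat tool)
msgs = [
    datetime(2026, 2, 16, 10, 30),  # Monday, in hours
    datetime(2026, 2, 16, 21, 5),   # Monday, after hours
    datetime(2026, 2, 21, 11, 0),   # Saturday, weekend
    datetime(2026, 2, 17, 14, 15),  # Tuesday, in hours
]
print(f"After-hours share: {after_hours_share(msgs):.0%}")  # → 50%
```

Run it weekly over the same export and watch the trend, not the absolute number; a rising share over two weeks is the intensity signal the checklist above asks you to track.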

Ending thought:

AI productivity is real. So is AI work intensification.

The difference is whether leaders treat AI as a tool people use, or a practice teams operate.

If you want a structured way to diagnose whether AI is creating true productivity or hidden intensity, Radiant Institute can share a lightweight 7 Drivers of AI Effectiveness scorecard your leaders and L&D team can run with any function.

Maverick Foo

AI Enablement Strategist for L&D

We help companies to Work Faster, Think Sharper & Learn Smarter with AI 🤖 AI-Infused Training Programs 🏅Award-Winning Consultant & Trainer 🎙️3X TEDx Keynote Speaker & Panel Moderator ☕️ Cafe Hopper 🐕 Stray Lover 🐈
