The GenAI Divide Is a Resourcefulness Problem, and Here Is What the Data Shows

Maverick Foo
Tuesday, 12th May 2026

MIT NANDA’s The GenAI Divide: State of AI in Business 2025 studied over 300 AI initiatives, conducted 52 structured interviews, and surveyed 153 senior leaders across four industry conferences. Its headline finding:

95% of organizations are getting zero measurable return from GenAI investment.

That stat made the rounds online, usually compressed into “95% of AI projects fail.” The framing felt alarmist, and for a while, I set the report aside.

But a recent client conversation brought it back. “We have ChatGPT, Copilot, and two other AI tools. But honestly? Nothing has really changed.”

That sentence captured exactly what the report describes, and it forced me to look again.

The headline may be imprecise. The underlying diagnosis is not.


Three Findings That Hold Up Under Scrutiny

The report’s most viral stat deserves context. It measures a specific slice of enterprise deployments within a limited timeframe, not all AI investment everywhere. But strip away the headline, and the patterns underneath are well supported.

The Pilot-To-Production Gap

60% of enterprises evaluated AI. 20% piloted it. Only 5% reached production. The technology worked. The organizations did not. They could not figure out how to move from experiment to operating reality. That is a resourcefulness failure, not a technical one.

The Shadow AI Economy

Only 40% of companies have an official LLM subscription. But workers from 90% of companies already use personal AI tools for work. The most resourceful employees are not waiting for permission. They are solving real problems with AI through unofficial channels. Call it shadow IT if you prefer. I see shadow resourcefulness, hiding in plain sight.

The Learning Gap

Most GenAI systems do not retain feedback, adapt to context, or improve over time. The report calls this the learning gap. Resourceful organizations close it: they capture what works, feed it back in, and compound the learning. The rest start from scratch every time.


The Common Thread: Organizational Resourcefulness

All three patterns point to the same underlying issue. The divide between the 5% getting value and the 95% getting nothing runs deeper than AI strategy, tool selection, or budget.

It is about organizational resourcefulness: the ability to find what is already working inside your organization, make it safe, and scale it deliberately.

The employees using personal AI tools on their own time are the signal. They are showing you where the friction is, and what the organization has not equipped them to do officially.

Read that as a compliance issue if you prefer. For L&D and talent development leaders, it is better understood as an enablement gap waiting to be closed.

Implications for Leaders and L&D

The MIT NANDA report confirms something many practitioners already feel: the bottleneck in AI adoption is not technology. It is implementation, learning, and organizational design. For leaders responsible for building AI capability, three implications stand out.


  • Shift from tool training to workflow enablement. Generic AI literacy programs do not close the gap. Teams need structured support to embed AI into how work actually moves, not just how individuals prompt.
  • Surface and learn from shadow AI usage. Your employees are already experimenting. The question is whether leadership creates safe channels to capture those experiments, learn from them, and scale what works.
  • Build learning loops, not one-off rollouts. The report’s strongest thesis is that AI systems, and the organizations around them, must learn and improve over time. One-and-done training will not produce compounding results.

Try This, This Week

  • Ask your team one question: “Where are you already using AI that we have not talked about?” The answer will surface the shadow resourcefulness in your organization and give you a starting point for structured enablement.
  • Pick one workflow where AI is being used informally and make it official. Document the process, add guardrails, share it with the wider team. That is how shadow resourcefulness becomes organizational capability.
  • Take the Team AI Effectiveness Scorecard. This week’s theme connects directly to the Scalability driver in the 7 Drivers of AI Effectiveness. Scalability measures how well AI ways of working spread beyond individuals into the wider team. If your Scalability score is low, it often means good AI practices are locked inside a few people’s heads instead of becoming shared habits. The Scorecard takes a few minutes and gives you a concrete baseline.

A closing thought:

The GenAI Divide is real, but it is not a technology story. It is a resourcefulness story. The organizations on the right side of the divide are not the ones with the biggest budgets or the best models. They are the ones that noticed what was already working, made it safe, and scaled it deliberately.

For leaders in L&D, talent development, and HR, the opportunity is clear: stop waiting for perfect conditions and start building from what your people are already doing. The signals are there. The question is whether your organization is resourceful enough to learn from them.

At Radiant Institute, we work with organizations across Malaysia and APAC to build structured AI enablement programs that close exactly this kind of gap. If this resonates, we would welcome a conversation about how to make your teams more resourceful with AI.

Maverick Foo


Lead Consultant, AI-Enabler, Sales & Marketing Strategist

Partnering with L&D & Training Professionals to Infuse AI into their People Development Initiatives 🏅Award-Winning Marketing Strategy Consultant & Trainer 🎙️2X TEDx Keynote Speaker ☕️ Cafe Hopper 🐕 Stray Lover 🐈
