AI Scenario Planning: Why Your Organization Needs to Plan for Four AI Futures

Maverick Foo
Tuesday, 28th April 2026

Most AI Strategies Are Built on One Assumption

Most organizations I work with have an AI strategy. It usually sounds like this:

AI is advancing fast. We need to adopt it or get left behind.

That is not a strategy. That is an assumption dressed as a plan.

It builds everything (the training, the governance, the investment) on top of one assumed trajectory. And yes, things are moving fast: in the last 12 months alone, we have seen reasoning models, agentic AI, collapsing token costs, and entire product categories appear overnight.

And yet the OECD published a major report this year that should make every leadership team pause. The paper, Exploring Possible AI Trajectories Through 2030, did not predict one AI future. It mapped four.


What the OECD Report Actually Says

The report was developed using strategic foresight methods, trend analysis, and input from leading AI researchers across institutions in North America, South America, Asia, and Europe. It assessed AI capabilities across nine dimensions, including language, problem solving, creativity, metacognition, memory, and robotic intelligence.

Its core finding is striking: the evidence is too uncertain to rule out any of the four broad scenario classes. The plausible range by 2030 spans everything from a plateau at roughly today’s level of capability to rapid improvement that produces AI systems broadly surpassing human abilities.

The four scenarios are:

Scenario 1: Progress Stalls

AI capabilities remain largely unchanged from 2025. Diffusion of current tools continues, but no transformational leap occurs. Systems still rely on substantial human support for detailed prompting, review, and context, and robustness issues and hallucinations continue to undermine reliability.

Scenario 2: Progress Slows

Incremental gains deliver continued but slower progress. AI systems become markedly more capable than today, excelling at structured reasoning and acting as useful assistants for tasks that take humans hours or days. But they still rely on humans to scope tasks, review decisions, and provide guidance.

Scenario 3: Progress Continues

Rapid progress persists at roughly today’s pace. AI systems can perform many professional tasks in digital environments that might take humans a month to complete. They operate with high autonomy within set boundaries, but deficits in continual learning and real-world generalization persist.

Scenario 4: Progress Accelerates

Dramatic progress leads to AI systems as capable as, or more capable than, humans across most cognitive dimensions. Systems can autonomously work toward broad strategic goals, collaborate with humans, and revise their approach as circumstances change.

All four remain plausible. And the experts consulted expressed high uncertainty and low confidence in their ability to predict which one is coming.


Why This Matters for Leaders

The uncertainty is not a caveat buried in the appendix. It is the central finding. The OECD explicitly states that probabilities cannot be assigned to any of the scenarios.

Several variables drive this uncertainty:

  • Scaling laws that powered recent breakthroughs may or may not continue to deliver capability gains.
  • AI reasoning and memory remain deeply limited. Current systems still struggle with continual learning, metacognition, and solving novel problems outside their training data.
  • Energy and infrastructure may hit real bottlenecks. Data centers are predicted to account for 20% of the growth in electricity demand in advanced economies through 2030.
  • Governance and regulation frameworks are still being written, and their direction could speed or slow development significantly.

If your AI strategy only works in one of those futures, it is not resilient. It is fragile. And fragile strategies break at the worst possible time.


A Quick Diagnostic

Before your next leadership meeting, consider these questions:

  • Does your AI plan hold up if progress suddenly stalls and current tools are as good as it gets for years?
  • Does it still make sense if capability surges faster than expected and your competitors move first?
  • Have your leaders even discussed the possibility that there is more than one trajectory?

If the honest answer is “we have not had that conversation yet,” you are not behind. Most organizations have not. But the ones that do will have something the others will not: the ability to respond instead of react.


Readiness Beats Prediction

The OECD report is best understood not as a technology forecast, but as a readiness framework. Its value for leaders is threefold:

  1. It provides a credible, research-backed structure for discussing AI uncertainty without speculation.
  2. It supports strategic planning around governance, literacy, and workforce readiness.
  3. It creates a bridge between today’s tool-focused AI conversations and tomorrow’s strategic AI leadership conversations.

You cannot control what the frontier labs do. But you can control how prepared your organization is to absorb whatever comes. And readiness compounds. Start now, and you are not just ready for one future. You are ready for all four.

Implications for Leaders and L&D

  • Revisit your AI strategy assumptions. If your current plan is anchored on a single trajectory (usually “AI keeps advancing fast”), stress-test it against the three other OECD scenarios. Ask what breaks.
  • Build scenario thinking into leadership development. Scenario planning is not a one-time offsite exercise. It should become a recurring discipline, with light quarterly reviews and formal resets every six months.
  • Invest in capabilities that hold across all four futures. AI literacy, ethical governance, critical thinking, and adaptable workforce skills remain valuable whether AI stalls or accelerates. These are your no-regret moves.

Try This This Week

  • Run a 15-minute scenario test in your next leadership meeting. Pick the OECD scenario your team assumes is most likely, then ask: “What if we are wrong?” Note the gaps.
  • Audit your AI governance for resilience. Check whether your current policies and training plans would still make sense if AI progress slowed dramatically, or sped up far beyond expectations.
  • Take the Team AI Effectiveness Scorecard. Scenario planning ties directly to the Continuity driver in the 7 Drivers of AI Effectiveness, which measures how well work continues when AI tools, models, or conditions change. If your Continuity score is low, your strategy may be more brittle than you think.

Ending Thought

The OECD gave us something valuable: permission to stop pretending we know which AI future is coming, and a framework for planning without that certainty.

The strongest organizations will not be the ones that predicted correctly. They will be the ones that built readiness across multiple plausible futures, then adapted as signals emerged.

If your leadership team is ready to move from a single-bet AI strategy to a resilient, scenario-informed approach, Radiant Institute can help. Our AI enablement programs are designed to build exactly this kind of strategic readiness, grounded in practical frameworks, not hype. Reach out to explore how we can support your team.

Maverick Foo

Lead Consultant, AI-Enabler, Sales & Marketing Strategist

Partnering with L&D & Training Professionals to Infuse AI into their People Development Initiatives 🏅Award-Winning Marketing Strategy Consultant & Trainer 🎙️2X TEDx Keynote Speaker ☕️ Cafe Hopper 🐕 Stray Lover 🐈
