AI Agents at Work: Why Governance Matters More Than Speed

Maverick Foo
Saturday, 28th February 2026
AI agents are no longer a future-state concept. They are already showing up in workflows across sales, HR, operations, and customer service. According to Gravitee’s The State of AI Agent Security 2026 report, the average organization now manages 37 AI agents. And the conversation has shifted from “should we use them” to “how do we manage what they do.”

That shift matters. Because the risk with AI agents is less about wrong answers and more about ungoverned actions.


The Intern Who Never Sleeps

Here is a useful way to think about what an AI agent actually is. If AI assistants help you with the work, AI agents do the work. They can take actions: pull data, update records, trigger workflows, and message people. They operate across systems, often without a human in the loop for every step.

Think of them as a new kind of digital team member. Eager, fast, and available around the clock. Now imagine you hired that person and forgot to give them a job scope, a manager to report to, or clear access limits.

They would not be malicious. They would just be unmanaged. And unmanaged agents create messy outcomes: the wrong client list sent to the wrong stakeholder, sensitive reports moved into the wrong folder, records updated that were never meant to be touched.


Four Signals from the Field

Gravitee’s The State of AI Agent Security 2026 report surveyed 919 executives and practitioners across industries including Financial Services, Telecommunications, Manufacturing, and Healthcare. The findings put numbers to what many leaders are quietly sensing.

Incidents are already the norm. 88% of organizations reported either confirmed or suspected AI agent security or privacy incidents in the past year. Only 12% had no reported incidents. This is not a future risk. It is a present reality.

Confidence is high, but oversight is partial. 82.0% of executives feel confident their policies can protect against misuse, yet only 47.1% of agents are actively monitored or secured. In practice, that gap means you might have rules on paper while half your agents operate without consistent supervision.

Shadow AI is already here. Only 14.4% of organizations have full IT and security approval across their entire agent fleet. Adoption is moving faster than standards can keep up, so different teams quietly develop different definitions of what “safe” looks like.

Agents are treated like tools, not workers with accountability. Only 21.9% of organizations treat agents as identity-bearing entities. Many agent-to-agent interactions rely on API keys (45.6%) or generic tokens (44.4%). When something changes in a system, it becomes difficult to trace who did what, and harder to enforce the principle of least privilege. (A short sketch of this identity gap follows below.)

These are not IT problems. They are leadership and culture problems.
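Still, it helps to see the mechanics. Here is a minimal sketch in Python of the difference between a shared key and identity-bearing agents. Everything in it is invented for illustration: the agent names, credentials, and scopes are hypothetical, not Gravitee's model or any particular platform's API.

    # Illustrative only: all names, credentials, and fields are hypothetical.

    # Anti-pattern: every agent authenticates with one shared key, so the
    # audit trail can only ever say "the key did it".
    SHARED_API_KEY = "sk-shared-placeholder"

    # Identity-bearing alternative: each agent carries its own credential,
    # owner, and minimum-necessary scopes, so every action traces back to
    # exactly one agent and one accountable human.
    AGENT_REGISTRY = {
        "crm-updater": {
            "owner": "sales-ops@example.com",
            "credential": "token-crm-updater",    # issued per agent, rotatable
            "scopes": ["crm:read", "crm:write"],  # least privilege
        },
        "report-summarizer": {
            "owner": "finance@example.com",
            "credential": "token-report-summarizer",
            "scopes": ["reports:read"],           # read-only by design
        },
    }

    def log_action(agent_id: str, action: str) -> None:
        """Record who did what; hard to reconstruct under a shared key."""
        owner = AGENT_REGISTRY[agent_id]["owner"]
        print(f"agent={agent_id} owner={owner} action={action}")

    log_action("crm-updater", "update_record:ACME-1042")

The syntax is beside the point. The point is that traceability and least privilege only exist when each agent has its own badge, which is exactly where the next section begins.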


Badge. Boundaries. Backstop.

For leaders who are not in a technical role, here is a practical frame to apply when your organization starts expanding its use of AI agents.

Badge: Every agent needs a clear identity. What is it? What does it do? Who owns it?

Boundaries: Explicit permissions. What can it access, and what is out of scope? The default should be minimum necessary access, not maximum convenience.

Backstop: Monitoring and human checkpoints for high-risk actions. If an agent made a consequential change today, how quickly could you catch it, explain it, and stop it from happening again? The Gravitee report found that only 7.7% of organizations audit agent activity daily. The majority rely on monthly reviews. For systems that can execute hundreds of tasks per second, that is a significant blind spot.
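For readers who want to see the frame written down, here is one hypothetical way all three could be captured in a single agent manifest, sketched in Python. The structure and field names are invented for illustration, not drawn from the Gravitee report or any specific platform.

    # A hypothetical agent manifest applying Badge, Boundaries, and Backstop.
    AGENT_MANIFEST = {
        "badge": {  # identity: what it is, what it does, who owns it
            "name": "invoice-chaser",
            "purpose": "send payment reminders for overdue invoices",
            "owner": "ap-team@example.com",
        },
        "boundaries": {  # explicit permissions, minimum necessary access
            "allowed": ["invoices:read", "email:send"],
        },
        "backstop": {  # monitoring and human checkpoints for high-risk actions
            "audit_log": True,
            "review_cadence": "daily",
            "human_approval_required_for": ["email:send_bulk"],
        },
    }

    def is_allowed(manifest: dict, permission: str) -> bool:
        """Default-deny: anything not explicitly allowed is out of scope."""
        return permission in manifest["boundaries"]["allowed"]

    assert is_allowed(AGENT_MANIFEST, "invoices:read")
    assert not is_allowed(AGENT_MANIFEST, "payments:refund")

Notice the default in is_allowed: deny first, allow deliberately. That one design choice is most of what "minimum necessary access" means in practice.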

This is not about slowing down adoption. It is about building the kind of AI infrastructure that scales without creating liability.


The Capability Shift Leaders Should Be Preparing For

Organizations that focus only on training people to use AI tools are solving half the problem. The more pressing need is training people to manage AI work: to set scope, review outputs, and hold accountability for what agents do on behalf of their teams.

One finding from the report makes this especially clear: 25.5% of deployed agents are already capable of both creating and instructing other agents, effectively establishing autonomous chains of command that can bypass human-centric review. That is not a distant scenario. It is happening in organizations right now.

The future is not just knowledge workers using AI. It is knowledge workers supervising AI, making judgment calls at the right moments, and building the systems that keep agents within safe and productive boundaries.

Implications for Leaders and L&D

  • Governance is a leadership skill, not just an IT responsibility. HR and L&D teams should be building the internal capability to define agent scope, ownership, and review cadence, not leaving it entirely to technical teams.
  • Shadow AI in agents is a culture signal. If teams are deploying agents without approval, it usually means the formal process is too slow or unclear. The fix is rarely more rules; it is better enablement and faster governance pathways.
  • Training programs need to evolve. Prompt literacy is a starting point, not the destination. The next wave of AI enablement should include how to brief, monitor, and course-correct AI agents as part of everyday management.

Try This This Week

  • Map one agent in your workflow. Pick an AI agent or automated process your team already uses and ask: does it have a clear owner, defined access limits, and a review checkpoint? If the answer to any of these is “not sure,” that is your starting point.
  • Use the Safety driver as a diagnostic lens. Safety, one of the 7 Drivers of AI Effectiveness, measures how consistently your team uses AI in ways that protect data and maintain trust. Ask yourself: do your agents meet the same standard you expect of your people? Run the Team AI Effectiveness Scorecard to get a baseline read on how your team is currently performing across Safety and the other six drivers.
  • Draft a one-page agent brief. For any agent your team is considering deploying, write down in plain language: what it does, what it can access, what it cannot, and who reviews its outputs. If you cannot write that brief in under ten minutes, the agent is not ready to deploy.
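To make that last step concrete, here is what a one-page brief might look like. The agent and every detail below are invented for illustration:

    Agent: meeting-notes summarizer
    What it does: drafts summaries of recorded team meetings and posts them
        to the team channel for review.
    What it can access: meeting recordings and the team channel. Nothing else.
    What it cannot access: HR records, client data, email, finance systems.
    Who reviews its outputs: the team lead, before each summary is shared.
    Owner: the operations manager, by name, not a department.

Every line maps back to Badge, Boundaries, and Backstop.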

Ending thought:

AI agent security is not an awareness problem. Organizations understand the risks. What the Gravitee data shows is that the gap is in execution: consistent identity models, clear ownership, defined boundaries, and continuous monitoring are still the exception, not the norm.

For leaders and L&D teams, the opportunity is to build that governance capability before the scale of deployment makes it significantly harder to retrofit.

Badge your agents. Set their boundaries. Build your backstop. And make managing AI work a core competency, not an afterthought.

If you want to go deeper into the data behind this article, the full State of AI Agent Security 2026 report by Gravitee is worth reading. It covers identity gaps, authorization failures, and real incident stories from practitioners across industries. You can download it directly from the Gravitee website.

And if you want to start closer to home, run the Team AI Effectiveness Scorecard to see where your team currently stands on Safety and the other six drivers. It takes under 7 minutes and gives you a concrete starting point for the conversations that matter.

Maverick Foo

AI Enablement Strategist for L&D

We help companies to Work Faster, Think Sharper & Learn Smarter with AI 🤖 AI-Infused Training Programs 🏅 Award-Winning Consultant & Trainer 🎙️ 3X TEDx Keynote Speaker & Panel Moderator ☕️ Cafe Hopper 🐕 Stray Lover 🐈
