Part 4: Transformation

Chapter 23: Governance for Autonomous Systems

As systems gain autonomy, governance becomes critical. Who's accountable when an agent makes a decision on its own? How do you ensure it's operating within its intended bounds?

The Governance Gap

Traditional IT governance assumes humans make decisions and systems execute them. Agentic AI blurs this line — systems now make decisions too.

Principles for Agentic AI Governance

  1. Human accountability: Every agent must have a human owner accountable for its behaviour
  2. Transparency: It should be possible to understand what an agent did and why
  3. Bounded autonomy: Agents should have clear boundaries — what they can and cannot do
  4. Proportionate oversight: Higher-risk agents need more oversight. Match oversight to consequence
  5. Continuous assurance: Governance isn't a one-time gate — it's ongoing
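These principles can be made concrete in code. The sketch below is illustrative only — the names (AgentPolicy, GovernedAgent) and the risk-tier scheme are assumptions, not part of any framework named in this chapter. It shows a named human owner (principle 1), an append-only audit log (principle 2), an action allowlist as the autonomy boundary (principle 3), and escalation of high-risk actions to human approval (principle 4).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentPolicy:
    """Hypothetical governance policy attached to one agent."""
    owner: str                      # accountable human (principle 1)
    allowed_actions: set[str]       # bounded autonomy (principle 3)
    review_threshold: str = "high"  # risk tier that triggers human approval (principle 4)


@dataclass
class GovernedAgent:
    name: str
    policy: AgentPolicy
    audit_log: list[dict] = field(default_factory=list)  # transparency (principle 2)

    def request(self, action: str, risk: str, rationale: str) -> str:
        """Decide whether an action may proceed, and record what was decided and why."""
        if action not in self.policy.allowed_actions:
            decision = "denied"          # outside the agent's boundary
        elif risk == self.policy.review_threshold:
            decision = "needs_approval"  # escalate high-risk actions to the owner
        else:
            decision = "allowed"
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": self.name,
            "owner": self.policy.owner,
            "action": action,
            "risk": risk,
            "rationale": rationale,
            "decision": decision,
        })
        return decision


agent = GovernedAgent(
    name="invoice-bot",
    policy=AgentPolicy(
        owner="ops-lead@example.com",
        allowed_actions={"read_invoice", "flag_invoice"},
    ),
)
print(agent.request("flag_invoice", risk="low", rationale="amount mismatch"))  # allowed
print(agent.request("pay_invoice", risk="high", rationale="auto-payment"))     # denied
```

Because every decision — including denials and escalations — lands in the audit log with the owner's name attached, the log itself supports continuous assurance (principle 5): it can be reviewed at any time, not just at an approval gate.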

Aligning with Existing Frameworks

Agentic AI governance should integrate with your broader governance structures.

The Pragmatix Digital Transformation Framework (pX-DTF), grounded in ISO/IEC 38500, provides a foundation.

The pX-DTF also aligns with ISO/IEC 42001:2023 for AI Management Systems, providing specific guidance for governing AI-powered capabilities.

Key Insight

Governance isn't bureaucracy — it's what makes autonomous systems trustworthy. The right governance enables more autonomy, not less, because it provides the assurance needed to extend trust.
