Chapter 23: Governance for Autonomous Systems
As systems gain autonomy, governance becomes critical. Who's accountable when an agent makes a decision? How do you verify that it's operating within its intended bounds?
The Governance Gap
Traditional IT governance assumes humans make decisions and systems execute them. Agentic AI blurs this line — systems now make decisions too.
Principles for Agentic AI Governance
- Human accountability: Every agent must have a human owner accountable for its behaviour
- Transparency: It should be possible to understand what an agent did and why
- Bounded autonomy: Agents should have clear boundaries — what they can and cannot do
- Proportionate oversight: Match the level of oversight to the consequences of failure; higher-risk agents warrant closer scrutiny
- Continuous assurance: Governance isn't a one-time gate — it's ongoing
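The first four principles can be made concrete in code. The sketch below is a minimal, hypothetical policy record: the agent name, owner, action list, and risk tiers are all invented for illustration, not drawn from any particular framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Which risk tiers require a human to approve each action before it runs
# (proportionate oversight). Tier names here are illustrative.
APPROVAL_REQUIRED = {"low": False, "medium": False, "high": True}

@dataclass
class AgentPolicy:
    agent_id: str
    owner: str                          # human accountability: a named owner
    allowed_actions: set                # bounded autonomy: an explicit allow-list
    risk_tier: str                      # "low" | "medium" | "high"
    audit_log: list = field(default_factory=list)

    def authorise(self, action: str) -> bool:
        """Record every decision (transparency), then permit the action only
        if it is inside the agent's boundary and its tier allows autonomy."""
        permitted = action in self.allowed_actions
        escalated = APPROVAL_REQUIRED[self.risk_tier]
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "permitted": permitted,
            "escalated": escalated,
        })
        return permitted and not escalated

# Hypothetical agent: a low-risk bot that may read and flag invoices.
policy = AgentPolicy("invoice-bot", owner="j.smith",
                     allowed_actions={"read_invoice", "flag_anomaly"},
                     risk_tier="low")
print(policy.authorise("flag_anomaly"))   # True: in bounds, low-risk tier
print(policy.authorise("issue_refund"))   # False: outside the boundary
```

Even denied actions are written to the audit log, so a reviewer can later reconstruct what the agent attempted and why it was blocked.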
Aligning with Existing Frameworks
Agentic AI governance should integrate with your broader governance structures.
The Pragmatix Digital Transformation Framework (pX-DTF), grounded in ISO/IEC 38500, provides a foundation:
- Direct: Set the direction and boundaries
- Monitor: Observe agent performance and compliance
- Evaluate: Assess whether agents are delivering intended value
The pX-DTF also aligns with ISO/IEC 42001:2023 for AI Management Systems, providing specific guidance for governing AI-powered capabilities.
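The Direct/Monitor/Evaluate cycle above can be sketched as a simple review loop. Everything below is an illustrative assumption: the metric names, thresholds, and telemetry shape are invented for the example, not part of pX-DTF, ISO/IEC 38500, or ISO/IEC 42001.

```python
def direct():
    # Direct: set the direction and boundaries for the agent.
    # These targets are hypothetical examples.
    return {"max_error_rate": 0.02, "target_tickets_resolved": 100}

def monitor(telemetry, boundaries):
    # Monitor: observe agent performance and flag boundary breaches.
    breaches = []
    if telemetry.get("error_rate", 0.0) > boundaries["max_error_rate"]:
        breaches.append("error_rate")
    return breaches

def evaluate(telemetry, boundaries):
    # Evaluate: assess whether the agent is delivering intended value.
    return telemetry.get("tickets_resolved", 0) >= boundaries["target_tickets_resolved"]

boundaries = direct()
telemetry = {"error_rate": 0.01, "tickets_resolved": 120}
print(monitor(telemetry, boundaries))   # []   -> no boundary breaches
print(evaluate(telemetry, boundaries))  # True -> delivering intended value
```

In practice each step feeds the next: evaluation results inform the next round of direction-setting, which is what makes governance continuous rather than a one-time gate.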
Governance isn't bureaucracy — it's what makes autonomous systems trustworthy. The right governance enables more autonomy, not less, because it provides the assurance needed to extend trust.
