
Governing AI Agents: Decision Admissibility

What access control misses, and why your compliance investment just became strategic

By Raimund Laqua, P.Eng., PMP — Lean Compliance Consulting, Inc.



Imagine your organization deploys an AI agent to process vendor invoices. It has permission to read the invoice system, check against contracts, flag anomalies, and submit approved payments below a threshold. The deployment is described as "governed" — the agent has defined access, risk-tiered autonomy, and a human-in-the-loop for high-value transactions.


Six months later, the finance team discovers the agent has been quietly paying invoices that are technically within policy but don't match the actual goods received. The invoices passed every permission check. Every access rule. And yet what the agent produced is not what the organization intended.


How did this happen? And why is the problem so hard to fix by adding more rules to the access model?


This is the governance question that agentic AI is forcing organizations to confront. One answer being proposed in the AI governance space is a concept called decision admissibility — the idea that we can govern autonomous agents by evaluating whether a proposed action could reach outcomes that are not permitted. If the outcomes that follow from the action are inadmissible, execution is refused. Proponents describe this as something stronger than authorization, because it considers not just whether the action is permitted but whether the outcomes it could produce are.


It's a concept worth taking seriously. But the more I've tried to understand what it means, the more I've come to think the approach is missing something fundamental.


Three Meanings of Admissibility


When I read through what different people mean by decision admissibility, I find three distinct concepts wearing the same label.


  1. The first is runtime evaluation — at the moment an action is proposed, does the set of outcomes it could reach include any that are inadmissible? If so, refuse it. This is the reachability-based version. It's meant to be stronger than access control because it evaluates outcomes, not just permissions. Whether it actually works depends on whether you can map reachable outcomes for a stochastic system, which is where the approach runs into difficulty. A minimal sketch of what such a gate would have to compute appears after this list.

  2. The second is design-time constraint — have we built the system so that inadmissible outcomes are structurally unreachable in the first place? If the agent has no access to financial systems, it cannot produce financial outcomes regardless of what it proposes. This is capability shaping, a concept well established in engineering and heavily regulated industries.

  3. The third is philosophical — admissibility as an ontological condition for what is permitted to exist within the system. This draws on a framework treating admissibility as a pre-theoretical property governing whether systems may form and persist. Coherent as philosophy, but it has not been translated into practical governance architecture.
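
To make the first meaning concrete, here is a minimal sketch in Python of what a runtime admissibility gate would have to compute. Every name in it (Action, Outcome, reachable_outcomes, is_inadmissible) is a hypothetical illustration rather than an existing API, and the sketch assumes the reachable-outcome set can be enumerated or usefully approximated, which is precisely the assumption in question.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical types for illustration; no existing framework is implied.
@dataclass(frozen=True)
class Action:
    name: str       # e.g. "submit_payment"
    payload: dict   # action parameters

@dataclass(frozen=True)
class Outcome:
    description: str  # e.g. "funds released against unmatched goods receipt"

def admissibility_gate(
    action: Action,
    reachable_outcomes: Callable[[Action], Iterable[Outcome]],
    is_inadmissible: Callable[[Outcome], bool],
) -> bool:
    """Permit the action only if no reachable outcome is inadmissible.

    The entire burden falls on reachable_outcomes: for a stochastic
    agent the true reachable set is not enumerable, so any real
    implementation is an approximation of unknown completeness.
    """
    return not any(is_inadmissible(o) for o in reachable_outcomes(action))
```

Written out this way, the gap is visible: the gate itself is trivial, and everything difficult lives inside reachable_outcomes.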


Of the three, the first is the ambitious claim — the one that would actually represent something new. The second is existing practice under new vocabulary. The third hasn't become practical yet.


When Roles Don't Apply


In practice, organizations rarely implement runtime reachability evaluation — it's too hard. What they actually do is extend role-based access control to agents and call it governance. This is where a second problem shows up.


Role-based access control works for humans because roles do more than assign permissions. A role is a form of capability shaping. When an organization defines a role — finance analyst, compliance officer, procurement manager — it specifies the work, the authority, the competencies, the accountability. The permissions attached to the role are the enforcement mechanism. The role itself is the governance artifact.


Many agents don't fit this pattern. The invoice-processing agent in the earlier example doesn't occupy an existing role. No one designed a job description called "invoice processor" with a defined stratum (a level of work complexity), an accountable manager, and professional competencies. The agent was given permissions pulled from existing roles — without the role context that would have shaped how those permissions are used.


This is the real problem with extending role-based permissions to agents. The role is what shapes the capability. When the agent doesn't fit an existing role, the permissions model has nothing to anchor to, because the capability shaping that roles normally provide has been skipped entirely.
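
One way to see what was skipped is to compare what a bare permissions check can evaluate with what a role carries. The sketch below is illustrative Python; it assumes nothing about any particular access-control product, and the Role fields are drawn from the description above (work, authority, competencies, accountability) rather than from an existing schema.

```python
from dataclasses import dataclass, field

@dataclass
class PermissionsProfile:
    """What the agent was actually given: a flat set of grants."""
    grants: set[str] = field(default_factory=set)

    def allows(self, action: str) -> bool:
        return action in self.grants  # the only question this can answer

@dataclass
class Role:
    """What a human occupant would have had: the governance artifact."""
    title: str                       # e.g. "finance analyst"
    work: str                        # the work the role exists to do
    authority: str                   # what the occupant may decide alone
    competencies: list[str]          # what the occupant must be able to judge
    accountable_manager: str         # who answers for the role's output
    permissions: PermissionsProfile  # enforcement, derived from the above

# The invoice agent got only the bottom layer:
agent = PermissionsProfile(grants={"read_invoices", "submit_payment"})
agent.allows("submit_payment")  # True, and that is all governance sees
```

Everything in Role above the permissions field is the capability shaping; copying the grants across leaves only the bottom layer.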


The Judgment Problem


Set aside the practical problems with extending permissions for a moment and consider the reachability-based version of admissibility on its own terms. Even there, a fundamental issue remains.


Evaluating whether a proposed action could lead to inadmissible outcomes requires judgment. Not a rule match. Not a permissions check. Judgment — weighing context, purpose, consequences, and circumstances that have never occurred in exactly this combination before. In the invoice example, the invoices passed every rule. But a finance professional reviewing them would have noticed the pattern didn't fit how the organization actually operates. That's judgment about where the situation was heading, and it can't be reduced to a lookup table without losing what makes it judgment.


The gate is meant to put a human back in the loop — to preserve human judgment at the moment that matters most, when a proposed action would commit. That intent is right.


But the gate places decision-making at the latest possible point of intervention. At machine speed, across the volume of decisions agents produce, the human at the gate is either overwhelmed, reduced to rubber-stamping, or forced to fall back on pre-defined rules that substitute for judgment. Placing intervention at the last moment before commitment concentrates the hardest part of decision-making at the point where there is the least time and context to exercise judgment well.
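
The overwhelm claim is easy to quantify with a back-of-the-envelope calculation. The rates below are illustrative assumptions, not measurements.

```python
# Illustrative assumptions, not measured figures.
proposals_per_day = 10_000     # actions an agent fleet commits per day
seconds_per_review = 90        # time for a considered human judgment
workday_seconds = 8 * 60 * 60  # one reviewer's working day

review_hours = proposals_per_day * seconds_per_review / 3600
reviewers_needed = proposals_per_day * seconds_per_review / workday_seconds

print(f"{review_hours:.0f} review-hours per day")    # 250 review-hours
print(f"{reviewers_needed:.0f} full-time reviewers") # ~31 reviewers
```

Under those assumptions the gate needs roughly thirty full-time reviewers just to keep pace, before any of them has a spare minute to exercise the judgment the gate exists to preserve.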


And evaluating reachable outcomes in a stochastic system is exactly where judgment is most needed and hardest to apply. The reachable space isn't enumerable. Novel situations emerge from the interactions of inputs the designers never anticipated. The gate is being asked to do what no rule set can — evaluate possible futures the system itself cannot fully predict.


The Work Comes First


There is something else missing from the admissibility discussion, and it matters more than the technical limitations.


Governance begins with the work. What is the work that needs to be done? What is its nature, its complexity, its time horizon? What stratum does it belong to? What judgment does it require? You define the work first, then the role — the accountability, authority, and capability needed to do that work. The role serves the work.


Much of the current AI governance discourse has inverted this. It starts with the agent's permissions — what systems can it access, what actions can it perform — rather than with the work the agent is being asked to do. The agent gets a permissions profile, not a work definition.


An agent may have permission to create a spreadsheet. But should it? The work isn't "create a spreadsheet." The work is "analyze this data to support a decision about X." That work has a purpose, a context, a required level of judgment, an accountable manager, and consequences if done poorly.
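
If the governance artifact is the work definition rather than the permissions profile, it is worth seeing what one would minimally contain. The fields below follow the list in the preceding paragraph (purpose, context, judgment, accountability, consequences); the structure itself is a hypothetical sketch, not a standard.

```python
from dataclasses import dataclass

@dataclass
class WorkDefinition:
    """Defined before the agent, not derived from its permissions."""
    purpose: str               # "analyze this data to support a decision about X"
    context: str               # where the work sits in the organization
    required_judgment: str     # the judgment the work demands
    stratum: int               # level of work complexity
    accountable_manager: str   # the human who answers for the output
    consequences_if_poor: str  # what failure costs, and to whom

def derive_permissions(work: WorkDefinition) -> set[str]:
    # Hypothetical: map the defined work to the minimal grants it needs.
    ...
```

The direction of derivation is the point: permissions fall out of the work definition, not the other way around.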


Output-oriented delegation is not new. Organizations have always delegated by specifying desired outcomes. What is different with agentic AI is that the governance of the work has been stripped away. When a human receives a delegation, they operate within an ecosystem of standard operating procedures (SOPs), professional standards, budgets, timelines, and accountability for method. When an agent receives it, that ecosystem is largely absent. The delegation pattern is the same. What's missing is everything that used to govern the journey.


The Compliance Reversal


Here is what most AI governance discourse gets wrong: it assumes we need an entirely new governance apparatus. We don't.


For decades, compliance documentation has been treated as overhead — a cost of doing business producing no operational value. Organizations have tried to minimize it, consolidate it, or tolerate it. Executives have questioned the investment.


Agentic AI changes that calculation. The substance of those procedures, standards, and policies — the accumulated institutional knowledge about how work should be done, what constraints matter, what outcomes are expected — is exactly what's needed to govern machine agents. An SOP written for a human to read doesn't constrain an agent's behaviour, but the substance of that SOP is exactly what an agent needs as its constitutional architecture. The documentation isn't overhead anymore. It's the training material. It's the regulatory DNA.
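
What machine-consumable means in practice can be shown with a small example. Suppose an accounts-payable SOP contains the clause "payments above $10,000 require a three-way match against the purchase order and the goods receipt." The translation below uses a simple made-up rule format in Python, not any particular policy engine.

```python
# The SOP clause as a human reads it:
#   "Payments above $10,000 require a three-way match against the
#    purchase order and the goods receipt before release."
#
# The same substance as an agent-consumable constraint (hypothetical format):
sop_rule = {
    "id": "AP-SOP-014.3",
    "applies_to": "submit_payment",
    "condition": {"amount_usd": {"gt": 10_000}},
    "require": ["match_purchase_order", "match_goods_receipt"],
    "on_fail": "block_and_escalate",
    "accountable_role": "accounts_payable_manager",
}

def check(action: dict, rule: dict) -> bool:
    """True if the action satisfies the rule's preconditions."""
    if action["name"] != rule["applies_to"]:
        return True  # rule does not apply
    threshold = rule["condition"]["amount_usd"]["gt"]
    if action["amount_usd"] <= threshold:
        return True  # below threshold, no match required
    return all(action.get(req, False) for req in rule["require"])

action = {"name": "submit_payment", "amount_usd": 25_000,
          "match_purchase_order": True, "match_goods_receipt": False}
check(action, sop_rule)  # False -> block_and_escalate
```

Note that this is substantially the rule that would have caught the invoice agent in the opening example: the substance was already in the SOP; only the form kept it from governing the agent.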


Organizations that invested seriously in operational compliance — that kept their SOPs current, their standards well-defined, their policies genuinely operationalized — are in a materially better position than those that treated compliance as a paper exercise. The investment that was hardest to justify becomes the asset that's hardest to replicate.


Where to Start


For governance, risk, and compliance professionals facing agentic AI deployment, there are four practical starting points.


  • Inventory your agents against existing roles. For each agent running or planned, ask: does this agent occupy a defined role? If not, what is the work being delegated to it, at what stratum, with what accountability? Agents without role anchors are agents without governance — no matter how many permissions they have.

  • Audit your compliance documentation for machine readability. Your SOPs, standards, and policies are about to become the constitutional architecture for AI governance. Which of them could a machine agent consume? Which exist only as unstructured narrative? Which haven't been updated in years? The gap between "documented" and "machine-consumable" is the gap between having compliance and having AI governance.

  • Define the work before defining the agent. Before deploying any new agent, require a work definition: what is being delegated, to what stratum, under whose accountability, against what obligations. If the work definition cannot be written clearly, the agent should not be deployed.

  • Treat capability expansion as a governance event. Every new tool, integration, or data source an agent acquires is an expansion of the capability envelope. It is not an operational decision — it is a governance decision that requires recharacterization, revalidation, and managerial accountability. A sketch of such an intercept follows this list.
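
For the last point, the shape of the control is simple even though the governance work behind it is not. The sketch below is a hypothetical intercept in Python; register_tool, GovernanceRecord, and the field names are illustrative, not an existing agent framework's API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class GovernanceRecord:
    """Evidence that a capability expansion was treated as a governance event."""
    tool_name: str
    recharacterized_risk_tier: str  # agent re-tiered with the new capability
    revalidated_on: date            # work definition revalidated
    accountable_manager: str        # a named human, not a team alias

def register_tool(agent_id: str, tool_name: str,
                  record: GovernanceRecord | None) -> None:
    # The operational step is gated on the governance step, not the reverse.
    if record is None:
        raise PermissionError(
            f"{tool_name}: capability expansion for {agent_id} "
            "has no governance record; registration refused")
    # ... proceed with the actual tool registration ...
```

The design choice is that the operational step is gated on the governance record, not the reverse: no record, no new capability.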


None of this requires a new paradigm. The organizational disciplines that govern human work — defining the work, anchoring it in roles, establishing accountability, operationalizing standards — are the same disciplines that will govern machine work. Governing AI agents isn't something separate from what compliance professionals already do. It's what compliance professionals already do, extended to a new class of agent. Permissions alone won't govern. But the rest of what your organization has already built can.



Raimund Laqua is the Founder and Principal Engineer at Lean Compliance Consulting, Inc., where he develops the Operational Compliance Model (OCM) and the Organizational Compliance Framework (OCF) — governance tools for the Intelligence Age. He can be reached at rlaqua@leancompliance.ca.

© 2026 Lean Compliance Consulting, Inc. — All rights reserved.

