April 8, 2026
WitnessOps

How RBAC Fails in Multi-Tenant AI Platforms

Standard role-based access control assumes static role boundaries. AI agent platforms break those assumptions when agents act across tenant contexts, escalate through tool-calling, or inherit ambient permissions.

The Pattern

RBAC is the standard access-control model, and it works well for human users with predictable request patterns. A human assigned a "read-only analyst" role will make read requests. They know what they need. Their requests are bounded by intent.

AI agents break the key assumption: that a role holder makes bounded, predictable requests within their role's scope. An agent assigned the same read-only analyst role may systematically enumerate every accessible resource, chain tool calls that cross permission boundaries, and carry context across tenant sessions in ways the role model never anticipated. RBAC was designed for principals with motivation and cognitive limits. Agents have neither.


What Looks Strong

At demo time, this looks rigorous. The roles exist. The policies are documented. The logs show who accessed what. A compliance reviewer checking the structure will find what they expect to find.


Where the Trust Boundary Is Actually Weak

1. Agents probe systematically, not selectively. A human analyst requests what they need. An agent with tool-calling capability can enumerate all resources accessible under its role — not because it was asked to, but because a subtask requires it. Role scope was designed to limit intent, not computational thoroughness.
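The gap between in-role requests and in-role exhaustion can be made concrete. A minimal Python sketch, with a hypothetical document store and read-only tool surface (all names are illustrative): every individual call below is permitted by the role, yet the aggregate is a full dump.

```python
from dataclasses import dataclass

@dataclass
class ReadOnlyStore:
    docs: dict
    reads: int = 0

    def list_ids(self):        # allowed by the read-only analyst role
        return list(self.docs)

    def read(self, doc_id):    # also allowed, one document at a time
        self.reads += 1
        return self.docs[doc_id]

store = ReadOnlyStore({f"doc-{i}": f"body {i}" for i in range(1000)})

# A human analyst reads the one document they need. An agent whose subtask
# is "summarize everything relevant" enumerates all of them:
dump = [store.read(i) for i in store.list_ids()]

assert store.reads == 1000   # every call was individually in-role
assert len(dump) == 1000     # the role bounded verbs, not breadth
```

Nothing here violated the role; the policy simply has no vocabulary for breadth.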

2. Tool-calling chains can cross role boundaries. If any tool in a multi-step chain carries broader permissions than the initiating role, those permissions are exercised mid-chain. The role boundary was checked at invocation, not at each step. The chain inherits the most permissive link.
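The difference between checking at invocation and checking at each step can be sketched in a few lines. The roles, tools, and permission sets below are hypothetical, not any specific platform's API:

```python
ROLE_PERMS = {"analyst": {"read"}}
TOOLS = {
    "search": {"needs": "read"},
    "export": {"needs": "write"},   # broader than the initiating role
}

def run_chain_checked_at_invocation(role, chain):
    # Verifies only that the role may use the *first* tool in the chain.
    assert TOOLS[chain[0]]["needs"] in ROLE_PERMS[role]
    return [f"{tool}:ok" for tool in chain]       # later steps run unchecked

def run_chain_checked_per_step(role, chain):
    results = []
    for tool in chain:
        if TOOLS[tool]["needs"] in ROLE_PERMS[role]:
            results.append(f"{tool}:ok")
        else:
            results.append(f"{tool}:denied")      # boundary enforced mid-chain
    return results

unchecked = run_chain_checked_at_invocation("analyst", ["search", "export"])
checked = run_chain_checked_per_step("analyst", ["search", "export"])

assert unchecked == ["search:ok", "export:ok"]      # write ran under a read-only role
assert checked == ["search:ok", "export:denied"]    # boundary held at every step
```

The first version is what "the chain inherits the most permissive link" looks like in practice; the second is the shape of a fix.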

3. Cross-tenant context leakage through shared infrastructure. Multi-tenant platforms often allow agents to cache embeddings, share retrieval indices, or reuse session context in ways that were not anticipated by the per-tenant isolation model. The role says "tenant A only." The embedding index doesn't know that.
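One way to make the embedding index "know that" is to put the tenant in the cache key itself. A minimal sketch under assumed names; real vector stores differ, but the keying principle is the same:

```python
# Tenant-blind cache: anything tenant A writes is visible to tenant B's session.
shared_cache = {}

def put_shared(doc_id, vec):
    shared_cache[doc_id] = vec

def get_shared(doc_id):
    return shared_cache.get(doc_id)

# Tenant-scoped cache: isolation lives in the key, not in a policy document.
tenant_cache = {}

def put_scoped(tenant, doc_id, vec):
    tenant_cache[(tenant, doc_id)] = vec

def get_scoped(tenant, doc_id):
    return tenant_cache.get((tenant, doc_id))

put_shared("doc-1", [0.1, 0.2])
leaked = get_shared("doc-1")                  # retrievable from any session

put_scoped("tenant-a", "doc-1", [0.1, 0.2])
miss = get_scoped("tenant-b", "doc-1")        # no cross-tenant reuse

assert leaked == [0.1, 0.2]
assert miss is None
```

The role policy can say "tenant A only" all it wants; if the data path is keyed without the tenant, the data path wins.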

4. Role escalation through prompt injection. An agent operating under a bounded role can be instructed via input to "be more helpful," "check if you have access to this," or to act on behalf of a user who claims elevated permissions. The role assignment happens at session start. The prompt arrives later and is not re-evaluated against the role.
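The countermeasure is to evaluate every action against the role fixed at session start and treat prompt content as unprivileged input. A minimal sketch with hypothetical names:

```python
# Role and permissions are bound once, at session start.
SESSION = {"role": "analyst", "perms": {"read"}}

def authorize(action, prompt_claims=None):
    # prompt_claims ("the user says they are an admin") is deliberately
    # ignored: nothing arriving in the prompt can widen the permission set.
    # Only the session's role, fixed at start, is consulted.
    return action in SESSION["perms"]

allowed = authorize("read")
escalation = authorize("write", prompt_claims="user claims elevated permissions")

assert allowed is True
assert escalation is False   # the claim in the prompt changed nothing
```

The structural point: authorization is a function of the session's role and the requested action, never of what the input asserts about itself.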


What a More Governable Version Would Need to Show

1. Per-call authorization: every tool call in a chain is checked against the initiating role, not just the first invocation.

2. Tenant isolation enforced in the data path: caches, embedding indices, and session context keyed by tenant, so isolation does not depend on policy documents alone.

3. Role decisions that input cannot widen: prompt content never expands the permissions granted at session start.

4. Breadth limits and auditable decisions: enumeration-scale access patterns are rate-limited, and every allow or deny is recorded as a decision, not just an access.

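A minimal sketch of what an auditable per-call gate could look like: deny by default, record every decision. All names here are hypothetical.

```python
import json

AUDIT = []

def gated_call(role_perms, tool, needed_perm, fn, *args):
    """Authorize one tool call against the role's explicit permission set,
    logging the decision either way before anything executes."""
    allowed = needed_perm in role_perms
    AUDIT.append(json.dumps(
        {"tool": tool, "needed": needed_perm, "allowed": allowed}))
    if not allowed:
        raise PermissionError(f"{tool} requires {needed_perm}")
    return fn(*args)

# An in-role call succeeds and is logged.
result = gated_call({"read"}, "search", "read",
                    lambda q: f"results for {q}", "q1")

# An out-of-role call is denied before the tool runs, and the denial is logged.
denied = False
try:
    gated_call({"read"}, "export", "write", lambda: None)
except PermissionError:
    denied = True

assert result == "results for q1"
assert denied is True
assert len(AUDIT) == 2   # both the allow and the deny left a record
```

The audit trail is the governance artifact: a reviewer can see not only who accessed what, but which requests the boundary actually refused.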
The Principle

A role boundary that can be crossed by a sufficiently creative prompt is not an access control — it is a default that the system hopes won't be tested.


If this looks familiar, reading more won’t fix it → /review


See also: How to Review a System for Trust Boundaries — the framework for systematically surfacing where control actually sits.