Reviews
Architecture critiques, trust-boundary reviews, and assessments of how system claims hold up under scrutiny.
How AI Agent Systems Treat Governance Failures as Implementation Details
When an AI agent exceeds its scope, bypasses a policy gate, or completes without producing a receipt, most platforms treat the event as a bug to fix, not as a designed failure path. The missing recovery architecture is the governance gap.
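A minimal sketch of the alternative the review points toward: a policy gate whose denial path is a designed outcome that always emits a receipt, rather than a silent exception. Every name here (PolicyGate, Receipt, the scope strings) is hypothetical, not any platform's actual API.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class Receipt:
    receipt_id: str
    agent_id: str
    action: str
    decision: str   # "allowed" or "denied" -- every path produces one
    reason: str
    timestamp: float

class PolicyGate:
    def __init__(self, allowed_scopes: set[str]):
        self.allowed_scopes = allowed_scopes
        self.audit_log: list[Receipt] = []

    def check(self, agent_id: str, action: str, scope: str) -> Receipt:
        """Denial is a first-class outcome, not an unhandled error."""
        if scope in self.allowed_scopes:
            decision, reason = "allowed", "scope within policy"
        else:
            decision, reason = "denied", f"scope {scope!r} not granted"
        receipt = Receipt(str(uuid.uuid4()), agent_id, action,
                          decision, reason, time.time())
        self.audit_log.append(receipt)
        return receipt

gate = PolicyGate(allowed_scopes={"read:tickets"})
r = gate.check("agent-7", "export_customers", scope="write:crm")
print(json.dumps(asdict(r), indent=2))  # a denial still leaves evidence
```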
How Zero-Trust Marketing Obscures Trust Boundaries
Zero-trust as a marketing label presents the right vocabulary while obscuring where trust boundaries actually sit. The label does not prove the architecture.
Why Third-Party Verifiers Still Fail When the Evidence Path Is Controlled
A third-party verifier that reads only what the system under review provides is not an independent check. Verifier independence is necessary but not sufficient; the evidence path must be independent as well.
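A compressed illustration, in hypothetical Python, of why a controlled evidence path defeats even an honest verifier: when the system under review supplies both the evidence and the value it should match, agreement proves only self-consistency. The hash comparison stands in for any attestation check.

```python
import hashlib

def verify(evidence: bytes, expected_digest: str) -> bool:
    """The verifier itself is honest and independent."""
    return hashlib.sha256(evidence).hexdigest() == expected_digest

# Controlled evidence path: the audited system supplies both the evidence
# and the digest it should match, so a pass proves only that the system
# agrees with itself, not that the evidence reflects what happened.
evidence = b"all policy checks passed"
digest_from_same_system = hashlib.sha256(evidence).hexdigest()
print(verify(evidence, digest_from_same_system))  # True, and meaningless

# Independent evidence path: the expected digest must come from a record
# the audited system could not rewrite after the fact (anchored outside
# its control at event time); only then does a match carry weight.
```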
Why Most AI Workflow Demos Are Hard to Trust
AI workflow demos routinely conflate capability with governance. This review examines the trust-boundary problems that make most demos unreliable as evidence of production readiness.
How RBAC Fails in Multi-Tenant AI Platforms
Standard role-based access control assumes static role boundaries. AI agent platforms break that assumption when agents act across tenant contexts, escalate privileges through tool-calling, or inherit ambient permissions.
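A sketch of the static-boundary assumption failing, with every identifier invented for illustration: the naive check tests the role alone, so a role earned in one tenant travels with the agent everywhere, while the tenant-bound check ties each grant to the tenant it was made in.

```python
ROLE_GRANTS = {
    # (agent_id, tenant_id) -> roles granted in that tenant
    ("agent-7", "tenant-a"): {"support_reader"},
}

def can_read_naive(agent_roles: set[str]) -> bool:
    # Static check: assumes the role applies wherever the agent acts.
    return "support_reader" in agent_roles

def can_read_tenant_bound(agent_id: str, tenant_id: str) -> bool:
    # Tenant-bound check: a grant made in tenant-a confers nothing
    # in tenant-b, even for the same agent.
    return "support_reader" in ROLE_GRANTS.get((agent_id, tenant_id), set())

roles_from_session = {"support_reader"}              # ambient, tenant-free
print(can_read_naive(roles_from_session))            # True, even in tenant-b
print(can_read_tenant_bound("agent-7", "tenant-b"))  # False: no grant there
```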
Why Compliance Dashboards Are Not Security Evidence
Compliance dashboards present internal state as if it were independently verifiable. They are presentation, not proof. Trace the evidence chain and it ends inside the system.
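A toy illustration of where that evidence chain ends: the dashboard widget and the "evidence export" both read state the same system writes, so nothing in the chain ever leaves its trust domain. All names are hypothetical.

```python
internal_state = {"checks_passed": 42, "checks_failed": 0}  # system-owned

def dashboard_widget() -> str:
    # Presentation layer reads the same store the system writes to.
    return f"{internal_state['checks_passed']} controls passing"

def export_evidence() -> dict:
    # The "evidence export" is the same internal state, re-serialized.
    # From outside, it is indistinguishable from fabricated state.
    return dict(internal_state)

print(dashboard_widget())
print(export_evidence())
```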
What Breaks When Agents Call External APIs
The trust boundary between an AI agent and an external API is wider than most architectures acknowledge. Failure modes include scope leakage, credential inheritance, response manipulation, and unverifiable execution.
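A sketch of one of those failure modes, credential inheritance, using invented names and a stand-in mint_token callable: the naive call forwards whatever token is ambient in the agent's process, while the scoped variant mints a short-lived credential for the single call.

```python
import urllib.request

AMBIENT_TOKEN = "org-wide-admin-token"  # hypothetical: inherited from the host process

def call_tool_naive(url: str) -> bytes:
    # The agent forwards its ambient credential to an external party,
    # and trusts the response as-is (the manipulation risk noted above).
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {AMBIENT_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def call_tool_scoped(url: str, mint_token) -> bytes:
    # Narrower pattern: mint a short-lived credential scoped to this one
    # call, so a leaked or replayed token grants little.
    token = mint_token(audience=url, ttl_seconds=60)
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```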