Frameworks
Reusable models, checklists, and review structures for evaluating trust boundaries, verification claims, and system behavior under scrutiny.
How to Design a Recoverable Failure Path for Governed Operations
A recoverable failure path is not error handling — it is the governance architecture for what happens when an operation fails. This framework provides the structure for designing that path as a first-class artifact.
How to Scope a Governed AI Engagement
A structured method for determining whether an AI system problem is reviewable, what inputs are needed, and what a scoped engagement should produce.
How to Test Whether a Proof Surface Is Actually Independent
Independence is testable. This framework gives a structured method for determining whether a proof surface meets the independence threshold — what to examine, what artifacts to check, and what a passing result actually requires.
How to Review a System for Trust Boundaries
A structured checklist for evaluating where trust sits in a system, what is assumed, and where claims break down under scrutiny.
How to Evaluate an AI Agent System for Production Readiness
A structured checklist for evaluating whether an AI agent system is ready for production use. Anchored to authority boundaries, scope enforcement, policy gates, evidence completeness, replayability, and independent verification.