NIST CSF 2.0 Evidence Mapping
Evidence-mapping template for showing how WitnessOps evidence and independent verification records support NIST CSF 2.0 functions and categories.
This page is an evidence-mapping template. It does not state that WitnessOps is compliant with any framework, law, or regulation. It helps teams map emitted artifacts and verification records to external requirements.
Shared trust boundary
- WitnessOps emits governed execution evidence such as receipts, manifests, approval-linked records, execution metadata, and preserved artifacts.
- Independent verification checks evidence properties such as signatures, integrity, continuity, and correspondence between declared scope and stored records.
- Neither product makes the external framework determination on its own. Control design, legal interpretation, policy ownership, and organizational accountability remain external.
Shared trust assumptions
Record any assumptions that apply before relying on this mapping:
- host integrity remains a trust assumption
- tool and adapter integrity remain trust assumptions
- signing key control and availability remain trust assumptions
- scope definitions, identity sources, and approval policy configuration remain trust assumptions
- some controls, reviews, and legal interpretations remain manual or organization-owned
Shared failure-state explanation
This mapping is only as strong as the governed evidence chain.
If approvals, scope records, receipts, manifests, or verification outputs are missing, inconsistent, or uncheckable, then the activity is not fully supported by the governed execution record. That does not prove the activity was invalid, but it does mean the auditor or reviewer cannot rely on this template alone to establish traceable governed execution.
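The failure-state rule above can be sketched as a minimal check. The packet structure, record names, and `checkable` flag here are hypothetical illustrations, not a WitnessOps API:

```python
# Hypothetical record names; a real packet's contents are defined by the team.
REQUIRED_RECORDS = ("approval", "scope", "receipt", "manifest", "verification_output")

def chain_supported(packet: dict) -> bool:
    """Return True only if every required record is present and checkable.

    A False result does not prove the activity was invalid; it means this
    packet alone cannot establish traceable governed execution.
    """
    return all(
        packet.get(name, {}).get("checkable", False)
        for name in REQUIRED_RECORDS
    )
```

A packet missing any record, or holding a record that cannot be checked, fails the whole chain; there is no partial credit in this rule.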
Problem this page solves
NIST CSF 2.0 asks for outcomes at the function and category level. WitnessOps emits operational records. Teams need a repeatable way to map those records to CSF language without claiming that evidence capture alone proves control effectiveness or regulatory conformity.
Reader outcome
After completing this page, you should be able to:
- map a CSF function/category to concrete WitnessOps artifacts
- separate direct evidence from supporting context
- document what independent verification confirms
- record residual gaps, trust assumptions, and manual dependencies
- assemble a review packet without certification-style claims
Mechanism-first mapping method for NIST CSF 2.0 functions/categories
Treat each mapping row as one mechanism under one CSF category (for example: GV.RR, ID.AM, PR.PS, DE.CM, RS.AN, RC.RP).
- Name the CSF function and category being mapped.
- State the mechanism being examined (approval gate, governed execution path, receipt issuance, evidence preservation, verification run).
- Attach direct evidence emitted by that mechanism.
- Attach independent verification output that tests continuity, attribution, and consistency.
- Add supporting context needed to interpret scope, ownership, or policy intent.
- Record what remains manual, external, or unresolved.
- Link exact artifacts a reviewer can inspect without oral reconstruction.
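One way to keep mapping rows uniform is to give each row an explicit shape. This sketch mirrors the columns of the table that follows; the class and field names are illustrative, not a defined schema:

```python
from dataclasses import dataclass, field

@dataclass
class MappingRow:
    # One mechanism under one CSF category, per the method above.
    function: str                     # e.g. "GV"
    category: str                     # e.g. "GV.RR Roles/Responsibilities"
    mechanism: str                    # mechanism under review
    direct_evidence: list[str] = field(default_factory=list)
    supporting_context: list[str] = field(default_factory=list)
    verification_question: str = ""
    residual_gaps: list[str] = field(default_factory=list)

row = MappingRow(
    function="GV",
    category="GV.RR Roles/Responsibilities",
    mechanism="Approval and authorization chain",
    direct_evidence=["approval records", "execution receipts"],
    verification_question="Does the linkage among actor, approver, and action remain intact?",
    residual_gaps=["enterprise role design is external"],
)
```

Keeping direct evidence and supporting context in separate fields prevents the two from being blended into a single undifferentiated evidence list.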
| NIST function | Category | Mechanism under review | Direct evidence (required) | Supporting context (optional but useful) | Verification question | Residual gap / dependency |
|---|---|---|---|---|---|---|
| GV | GV.RR Roles/Responsibilities | Approval and authorization chain | approval records, identity-linked actions, execution receipts | HR role matrix, org chart, policy ownership notes | Does the linkage among actor, approver, and action remain intact end-to-end? | Enterprise role design and segregation-of-duty controls outside WitnessOps |
| ID | ID.AM Asset Management | Scope declaration and target binding | scope records, target identifiers, manifests, receipt-linked evidence references | CMDB extracts, asset owner registry, environment inventory | Do artifacts and execution events map to declared targets only? | CMDB completeness and ownership accuracy are external assumptions |
| PR | PR.PS Platform Security | Constrained execution path | runbook constraints, adapter metadata, governed execution logs, denials/exceptions | baseline hardening standards, infra control docs | Did execution stay within approved boundaries and recorded controls? | Underlying host hardening and infra posture may be external |
| DE | DE.CM Continuous Monitoring | Event and evidence continuity | event logs, state transitions, receipts, preserved artifacts | SIEM correlation rules, monitoring runbooks | Is the evidence chain complete and internally consistent? | Sensor coverage and enterprise telemetry completeness are external assumptions |
| RS | RS.AN Incident Analysis | Observation and analysis traceability | timestamped observations, preserved artifacts, analyst notes linked to receipts | incident taxonomy, response procedures | Are conclusions traceable to recorded observations and source artifacts? | Analytical correctness still requires accountable human judgment |
| RC | RC.RP Recovery Planning | Closure and post-action review | closeout records, final receipts, approval history, post-action notes | continuity plans, recovery policy docs | Does closure evidence align with prior approvals and execution history? | Service restoration effectiveness may depend on external systems/processes |
Evidence classes and mapping logic (direct evidence vs supporting context)
Direct evidence is generated by the execution and evidence systems themselves and is required for a supportable mapping claim:
- receipts, manifests, event logs, approval records, preserved artifact references, verification outputs
Supporting context helps interpretation but does not independently prove a CSF outcome:
- policy text, control narratives, CMDB exports, role catalogs, architecture diagrams, runbooks outside captured execution
Use this logic:
- If direct evidence is present and independently verifiable, the mapping can support a bounded claim about what occurred.
- If only supporting context is present, mark the row as contextual support, not evidentiary proof.
- If direct evidence and context conflict, preserve both and mark the mismatch for review.
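The three-way logic above can be expressed as a small decision function, assuming hypothetical row flags and label strings chosen for this sketch:

```python
def classify_row(has_direct: bool, direct_verified: bool,
                 has_context: bool, conflict: bool = False) -> str:
    """Apply the mapping logic: conflict first, then direct, then context."""
    if conflict:
        # Direct evidence and context disagree: preserve both, flag for review.
        return "mismatch: preserve both and mark for review"
    if has_direct and direct_verified:
        return "supports a bounded claim about what occurred"
    if has_context:
        return "contextual support, not evidentiary proof"
    return "unsupported: no usable evidence recorded"
```

Note the ordering: a detected conflict outranks everything else, because silently preferring one source would hide exactly the mismatch a reviewer needs to see.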
Bounded claims: what this mapping can support vs what it cannot assert
What this mapping can support
- that specific governed actions occurred and were recorded
- that artifact-to-claim linkage is inspectable
- that independent verification confirmed integrity/continuity checks for the mapped records
- that explicit gaps and external dependencies were documented
What this mapping cannot assert
- legal or regulatory conformity determinations
- certification, attestation, or audit-pass status
- organization-wide control effectiveness
- adequacy of governance decisions made outside captured evidence
Observed vs inferred separation
Keep each row split between facts and interpretation:
- Observed: data directly present in receipts, logs, manifests, approvals, and preserved artifacts.
- Inferred: conclusions about effectiveness, sufficiency, maturity, or risk reduction.
Only observed data is directly verifiable from system records. Inferred conclusions require accountable human judgment.
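As a rough illustration of the split, a row's statements could be sorted into the two buckets. The keyword list here is a naive heuristic for the sketch only, not a substitute for the accountable human judgment the text requires:

```python
# Words that typically signal interpretation rather than recorded fact.
INFERENCE_MARKERS = ("effective", "sufficient", "mature", "reduces risk")

def split_statements(statements: list[str]) -> dict[str, list[str]]:
    """Partition statements into observed facts and inferred conclusions."""
    observed: list[str] = []
    inferred: list[str] = []
    for s in statements:
        bucket = inferred if any(m in s.lower() for m in INFERENCE_MARKERS) else observed
        bucket.append(s)
    return {"observed": observed, "inferred": inferred}
```

For example, "receipt r-12 links actor to action" lands in the observed bucket, while "the control is effective" lands in inferred and must be defended by a person, not by the records alone.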
Trust assumptions and limits
Record assumptions explicitly in each mapping packet, including:
- identity provider correctness and access-control integrity outside this mapping
- host/platform integrity where evidence is generated and stored
- key-management and signing control integrity
- retention and archival behavior in external systems
- completeness/correctness of upstream systems (CMDB, HRIS, ticketing, SIEM)
- legal interpretation and governance decisions owned by the organization
If an assumption is unverified in the packet, treat it as a stated limit.
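The "unverified assumption becomes a stated limit" rule can be sketched directly. The packet shape, a simple name-to-verified mapping, is a hypothetical convention for this example:

```python
def stated_limits(assumptions: dict[str, bool]) -> list[str]:
    """Return, sorted, every assumption left unverified in this packet.

    Each returned name must appear in the packet as an explicit stated limit.
    """
    return sorted(name for name, verified in assumptions.items() if not verified)
```

Because the function returns unverified names rather than suppressing them, a packet with no listed limits can only mean every recorded assumption was verified, never that the question went unasked.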
Next-page handoff to /docs/evidence-mapping/dora
Continue to DORA Evidence Mapping to apply the same mechanism-first method to DORA-specific obligations and boundaries.