DORA Evidence Mapping
Evidence-mapping template for showing how WitnessOps evidence and independent verification records may support DORA-oriented obligations.
This page is an evidence-mapping template. It does not state that WitnessOps is compliant with any framework, law, or regulation. It helps teams map emitted artifacts and verification records to external requirements.
Shared trust boundary
- WitnessOps emits governed execution evidence such as receipts, manifests, approval-linked records, execution metadata, and preserved artifacts.
- Independent verification checks evidence properties such as signature validity, integrity, continuity, and correspondence between declared scope and stored records.
- Neither product makes the external framework determination on its own. Control design, legal interpretation, policy ownership, and organizational accountability remain external.
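The integrity and continuity checks described above can be sketched as a hash chain over receipts. This is a minimal illustration under an assumed receipt shape (`payload`, `digest`, `prev_digest`); it is not the WitnessOps record format.

```python
import hashlib

def digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def verify_chain(receipts: list[dict]) -> bool:
    """Integrity: the stored digest matches the payload.
    Continuity: each receipt references the digest of its predecessor."""
    prev = None
    for r in receipts:
        if r["digest"] != digest(r["payload"]):
            return False  # integrity failure: payload no longer matches its digest
        if r["prev_digest"] != prev:
            return False  # continuity failure: the chain is broken or reordered
        prev = r["digest"]
    return True

# Hypothetical two-receipt chain: an approval followed by an execution record
r1 = {"payload": b"approval:alice:step-1", "prev_digest": None}
r1["digest"] = digest(r1["payload"])
r2 = {"payload": b"execute:step-1", "prev_digest": r1["digest"]}
r2["digest"] = digest(r2["payload"])
```

A tampered payload or a reordered chain fails the check, which is the sense in which verification output supports, but does not by itself interpret, the evidence.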
Shared trust assumptions
Record any assumptions that apply before relying on this mapping:
- host and platform integrity
- tool and adapter integrity
- signing key control and availability
- scope definitions, identity sources, and approval policy configuration
- controls, reviews, and legal interpretations that remain manual or organization-owned
Shared failure-state explanation
This mapping is only as strong as the governed evidence chain.
If approvals, scope records, receipts, manifests, or verification outputs are missing, inconsistent, or uncheckable, then the activity is not fully supported by the governed execution record. That does not prove the activity was invalid, but it does mean the auditor or reviewer cannot rely on this template alone to establish traceable governed execution.
Problem this page solves
The EU Digital Operational Resilience Act (DORA) requires financial entities to demonstrate operational resilience outcomes across governance, incident handling, testing, and third-party oversight. WitnessOps emits operational records, not legal determinations. Teams need a repeatable way to map that evidence to DORA-oriented obligations while explicitly documenting boundaries, assumptions, and unresolved gaps.
Reader outcome
After completing this page, you should be able to:
- map a DORA-oriented obligation area to concrete WitnessOps artifacts
- separate direct evidence from supporting context
- document what independent verification can confirm
- record residual gaps, trust assumptions, and manual dependencies
- assemble a review packet without certification-style claims
Mechanism-first DORA mapping model
Treat each mapping row as one mechanism under one DORA-oriented obligation area (for example: ICT risk governance traceability, incident evidence continuity, resilience testing evidence, third-party interaction controls, or accountability records).
- Name the obligation area and mechanism being examined.
- Attach direct evidence emitted by governed execution.
- Attach independent verification output for continuity, attribution, and consistency.
- Add supporting context needed for scope and policy interpretation.
- Record what remains manual, external, legal, or unresolved.
- Link exact artifacts a reviewer can inspect without oral reconstruction.
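One way to capture the steps above is a row record whose fields mirror the table columns. The field names are illustrative, not a product schema.

```python
from dataclasses import dataclass, field

@dataclass
class MappingRow:
    obligation_area: str                  # the DORA-oriented obligation area
    mechanism: str                        # the mechanism under review
    direct_evidence: list[str]            # required: receipts, manifests, approvals
    supporting_context: list[str] = field(default_factory=list)
    verification_question: str = ""
    residual_gaps: list[str] = field(default_factory=list)

# Hypothetical row; artifact identifiers are invented for illustration
row = MappingRow(
    obligation_area="ICT incident evidence readiness support",
    mechanism="Incident observation and escalation chain",
    direct_evidence=["receipt-4411", "manifest-2024-07/incidents"],
    verification_question="Is incident evidence complete and attributable?",
    residual_gaps=["Legal classification remains organization-owned"],
)
```

Keeping one mechanism per row makes each verification question answerable from the linked artifacts alone.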
| DORA-oriented obligation area | Mechanism under review | Direct evidence (required) | Supporting context (optional) | Verification question | Residual gap / dependency |
|---|---|---|---|---|---|
| ICT risk governance traceability | Approval and constrained execution path | approvals, identity-linked actions, runbook constraints, execution receipts | policy ownership notes, control narratives, committee references | Do approvals, actors, and executed actions remain attributable end-to-end? | Enterprise risk framework design and board accountability are external |
| ICT systems and control governance support | Declared tool/workflow boundary enforcement | adapter metadata, allow/deny events, governed workflow logs, exception records | infrastructure standards, platform architecture notes | Did execution stay inside declared and approved control boundaries? | Baseline infrastructure hardening and tooling governance may live outside WitnessOps |
| ICT incident evidence readiness support | Incident observation and escalation chain | timestamped observations, preserved artifacts, escalation records, receipt-linked timelines | incident taxonomy, notification playbooks | Is incident evidence complete, attributable, and linked to declared scope? | Legal classification and authority reporting remain organization-owned |
| Digital operational resilience testing support | Authorized test execution and evidence preservation | test scope records, authorizations, execution receipts, artifact manifests | test program cadence, sampling rationale | Do recorded test actions and artifacts match approved scope and declared steps? | Test adequacy and regulatory interpretation are external judgments |
| Third-party ICT dependency oversight support | Controlled interaction with external systems | scoped target records, approval history, interaction logs, evidence bundles | vendor inventory, contract obligations | Do third-party interactions match declared scope and preserved evidence? | Vendor governance, concentration-risk analysis, and contract enforcement are external |
| Auditability and accountability records | Closure and reviewer-ready evidence packet | closeout records, manifests, receipts, verification outputs | retention policy references, archive procedures | Can a reviewer reconstruct what happened from artifacts alone? | Long-term retention and legal-hold controls may depend on external systems |
This is a mapping aid, not a DORA conformity decision or certification instrument.
Evidence classes and mapping logic (direct evidence vs supporting context)
Direct evidence is generated by execution and evidence systems and is required for bounded mapping claims:
- receipts, manifests, event logs, approvals, preserved artifact references, verification outputs
Supporting context helps interpretation but does not independently prove fulfillment of a DORA-oriented obligation:
- policy documents, governance narratives, architecture diagrams, vendor registers, meeting records
Use this logic:
- if direct evidence is present and independently verifiable, the row can support a bounded claim about recorded activity
- if only supporting context is present, mark the row as contextual support, not evidentiary proof
- if direct evidence and context conflict, preserve both and mark the mismatch for review
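The three rules above can be sketched as a status function. The status labels are illustrative, not standard terms.

```python
def classify_row(direct_evidence: list[str], independently_verified: bool,
                 supporting_context: list[str], conflict: bool) -> str:
    if conflict:
        # preserve both sides and flag the mismatch for review
        return "mismatch-for-review"
    if direct_evidence and independently_verified:
        # the row can support a bounded claim about recorded activity
        return "bounded-claim-supported"
    if supporting_context and not direct_evidence:
        # contextual support only, not evidentiary proof
        return "contextual-support-only"
    return "insufficient-evidence"
```

Rows with direct evidence that has not been independently verified fall through to the last branch; the page's rules do not treat unverified direct evidence as sufficient.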
Bounded claims (what mapping can support vs cannot assert)
What this mapping can support
- that specific governed actions occurred and were recorded
- that artifact-to-claim linkage is inspectable
- that independent verification confirmed integrity/continuity checks on mapped records
- that explicit gaps and external dependencies were documented
What this mapping cannot assert
- legal interpretation of DORA articles or applicability outcomes
- regulatory conformity, certification, attestation, or audit-pass status
- organization-wide control effectiveness beyond captured evidence
- sufficiency of governance or risk decisions made outside recorded mechanisms
Do not present this mapping as legal advice.
Observed vs inferred separation
Keep each row split between facts and interpretation:
- Observed: facts directly present in receipts, logs, manifests, approvals, and preserved artifacts.
- Inferred: conclusions about effectiveness, sufficiency, resilience posture, or legal adequacy.
Only observed data is directly verifiable from system records. Inferred conclusions require accountable human judgment.
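The split can be enforced structurally: observed entries point at a checkable artifact, while inferred entries name an accountable human owner instead. The shape below is illustrative.

```python
# Hypothetical row split into observed facts and inferred judgments
row = {
    "observed": [
        {"fact": "approval recorded before execution", "artifact": "receipt-0042"},
        {"fact": "stored artifact hash matches manifest entry", "artifact": "manifest-17"},
    ],
    "inferred": [
        {"judgment": "the control operated effectively", "owner": "control owner"},
    ],
}

def well_formed(row: dict) -> bool:
    """Every observed fact cites an artifact; every inference names an owner."""
    return (all("artifact" in f for f in row["observed"])
            and all("owner" in j for j in row["inferred"]))
```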
Trust assumptions and limits
Record assumptions explicitly in each mapping packet, including:
- identity and access-control integrity outside this mapping
- host/platform integrity where evidence is generated and stored
- key-management and signing-control integrity
- retention and archival behavior in external systems
- completeness/correctness of upstream systems (CMDB, ticketing, vendor records, SIEM)
- legal interpretation and governance decisions owned by the organization
If an assumption is unverified in the packet, treat it as a stated limit.
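A simple assumption register makes that rule mechanical: any entry not explicitly marked verified surfaces as a stated limit rather than being silently trusted. The register shape is hypothetical.

```python
def stated_limits(register: list[dict]) -> list[str]:
    """Return the names of assumptions not explicitly verified in this packet."""
    return [a["name"] for a in register if not a.get("verified", False)]

register = [
    {"name": "signing key control", "verified": True},
    {"name": "upstream CMDB completeness", "verified": False},
    {"name": "external retention behavior"},  # status not recorded: treat as unverified
]
```

Note that a missing `verified` flag counts as unverified, so omissions in the packet become visible limits instead of implicit trust.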
Next-page handoff to /docs/evidence-mapping/eu-ai-act
Continue to EU AI Act Evidence Mapping to apply the same bounded, mechanism-first method to AI Act-oriented obligations and boundaries.