EU AI Act Evidence Mapping

A template for mapping WitnessOps evidence and independent verification records to selected EU AI Act obligations, without asserting compliance outcomes.

Evidence-Mapping Template Only

This page is an evidence-mapping template. It does not state that WitnessOps is compliant with any framework, law, or regulation. It helps teams map emitted artifacts and verification records to external requirements.

Shared trust boundary

  • WitnessOps emits governed execution evidence such as receipts, manifests, approval-linked records, execution metadata, and preserved artifacts.
  • Independent verification checks evidence such as signatures, integrity, continuity, and correspondence between declared scope and stored records.
  • Neither product makes the external framework determination on its own. Control design, legal interpretation, policy ownership, and organizational accountability remain external.
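The verification side of this boundary can be sketched in code. The receipt shape, field names, and HMAC-based signature below are illustrative assumptions, not the actual WitnessOps schema; the sketch only shows the three kinds of checks named above (signature, continuity, scope correspondence).

```python
import hashlib
import hmac
import json

def verify_receipt(receipt: dict, signing_key: bytes, prev_digest: str) -> dict:
    """Illustrative independent checks on a hypothetical receipt:
    signature validity, chain continuity, and declared-scope match."""
    body_bytes = json.dumps(receipt["body"], sort_keys=True).encode()
    expected_sig = hmac.new(signing_key, body_bytes, hashlib.sha256).hexdigest()
    return {
        "signature_ok": hmac.compare_digest(expected_sig, receipt["signature"]),
        "continuity_ok": receipt["body"]["prev_digest"] == prev_digest,
        "scope_ok": receipt["body"]["action"] in receipt["body"]["declared_scope"],
    }

# Build and sign an example receipt, then verify it against the prior digest.
key = b"demo-signing-key"
body = {
    "action": "deploy-model",
    "declared_scope": ["deploy-model", "run-eval"],
    "prev_digest": "abc123",
}
signature = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()
receipt = {"body": body, "signature": signature}
print(verify_receipt(receipt, key, "abc123"))
```

Note that all three checks passing still says nothing about the external framework determination; it only confirms the recorded evidence is internally consistent.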

Shared trust assumptions

Record any assumptions that apply before relying on this mapping:

  • host integrity remains a trust assumption
  • tool and adapter integrity remain trust assumptions
  • signing key control and availability remain trust assumptions
  • scope definitions, identity sources, and approval policy configuration remain trust assumptions
  • some controls, reviews, and legal interpretations remain manual or organization-owned

Shared failure-state explanation

This mapping is only as strong as the governed evidence chain.

If approvals, scope records, receipts, manifests, or verification outputs are missing, inconsistent, or uncheckable, then the activity is not fully supported by the governed execution record. That does not prove the activity was invalid, but it does mean the auditor or reviewer cannot rely on this template alone to establish traceable governed execution.
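A minimal sketch of that completeness check, assuming hypothetical names for the five evidence classes listed above:

```python
# Hypothetical evidence-class names; real record types are whatever the
# governed execution pipeline emits.
REQUIRED_CHAIN = {"approval", "scope_record", "receipt", "manifest",
                  "verification_output"}

def chain_status(recorded: set) -> str:
    """Missing links do not prove the activity was invalid; they mean the
    record alone cannot establish traceable governed execution."""
    missing = sorted(REQUIRED_CHAIN - recorded)
    if not missing:
        return "supported"
    return "not fully supported; missing: " + ", ".join(missing)

print(chain_status(REQUIRED_CHAIN))
print(chain_status({"approval", "receipt"}))
```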

Problem this page solves

EU AI Act obligations are role-dependent, use-case-dependent, and frequently interpreted across legal, governance, and engineering teams. Organizations often have operational records but no bounded method to map those records to selected EU AI Act obligations without overclaiming. This page provides that bounded mapping method.

Reader outcome

After working through this page, you should be able to:

  • map selected EU AI Act obligation areas to concrete WitnessOps artifacts
  • separate direct evidence from supporting context
  • document what independent verification can confirm from recorded artifacts
  • separate observed facts from inferred conclusions
  • produce a reviewer-ready packet with explicit gaps and trust assumptions

Mechanism-first mapping model for selected EU AI Act obligations

Treat each row as one mechanism under one selected obligation area (for example: role determination support, risk management support, logging/record-keeping support, human oversight support, or post-market monitoring support).

  1. Name the selected obligation area and mechanism being examined.
  2. Attach direct evidence emitted by governed execution.
  3. Attach independent verification output for continuity, attribution, and consistency checks.
  4. Add supporting context needed for scope, role, and policy interpretation.
  5. Record what remains legal, manual, external, or unresolved.
  6. Link exact artifacts a reviewer can inspect without oral reconstruction.
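The six steps above can be captured as a row record. The field and method names below are illustrative, not a defined schema; the sketch only shows how a row becomes inspectable without oral reconstruction.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MappingRow:
    obligation_area: str                 # step 1: selected obligation area
    mechanism: str                       # step 1: mechanism under review
    direct_evidence: List[str]           # step 2: emitted by governed execution
    verification_outputs: List[str]      # step 3: continuity/attribution checks
    supporting_context: List[str]        # step 4: scope/role/policy interpretation
    residual_gaps: List[str]             # step 5: legal, manual, external, unresolved
    artifact_links: List[str]            # step 6: exact artifacts to inspect

    def reviewer_ready(self) -> bool:
        # Reviewer-ready only when direct evidence, verification output,
        # and artifact links exist AND external dependencies are written down.
        return bool(self.direct_evidence and self.verification_outputs
                    and self.artifact_links and self.residual_gaps)

row = MappingRow(
    obligation_area="Human oversight support",
    mechanism="Approval checkpoints",
    direct_evidence=["approval-record-7"],
    verification_outputs=["attribution-check-7"],
    supporting_context=["oversight-procedure-v2"],
    residual_gaps=["oversight design adequacy is an external judgment"],
    artifact_links=["artifacts/approval-record-7.json"],
)
print(row.reviewer_ready())
```

A row with empty residual_gaps is deliberately not reviewer-ready: an undocumented dependency is the most common way a mapping overclaims.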
Obligation area: Role and applicability determination support

  • Mechanism under review: Workflow scope declaration and role-tagged approvals
  • Direct evidence (required): use-case declarations, system-boundary metadata, role-linked approvals, execution receipts
  • Supporting context (optional): legal role memos, governance charters, product classification notes
  • Verification question: Do executed workflows align with declared scope and role tags?
  • Residual gap / dependency: Provider/deployer role determination, high-risk classification, and legal applicability are external judgments

Obligation area: Risk management process support

  • Mechanism under review: Constrained execution path with exceptions
  • Direct evidence (required): approval gates, deny events, exception records, risk-linked runbook constraints, receipt timelines
  • Supporting context (optional): risk methodology docs, committee minutes, escalation policy
  • Verification question: Did execution stay within approved boundaries and record deviations?
  • Residual gap / dependency: Risk framework adequacy and legal sufficiency are external to evidence capture

Obligation area: Data and data-governance traceability support

  • Mechanism under review: Input/output provenance linkage
  • Direct evidence (required): source artifact references, provenance metadata, transformation logs, evidence manifests
  • Supporting context (optional): data quality assessments, rights/licensing reviews, data governance policy
  • Verification question: Are mapped outputs traceable to declared inputs and recorded handling steps?
  • Residual gap / dependency: Data representativeness, bias assessment, and lawful-data-use determinations are external

Obligation area: Technical documentation and record-keeping support

  • Mechanism under review: Artifact preservation and evidence continuity
  • Direct evidence (required): event logs, manifests, receipts, change history, verification outputs
  • Supporting context (optional): model cards, technical file narratives, architecture diagrams
  • Verification question: Can a reviewer reconstruct what happened from preserved records alone?
  • Residual gap / dependency: Formal technical-documentation completeness and retention-law interpretation are external

Obligation area: Human oversight support

  • Mechanism under review: Human intervention, pause, override, and approval points
  • Direct evidence (required): intervention logs, pause/resume events, approval records, escalation records
  • Supporting context (optional): oversight procedures, staffing model, training records
  • Verification question: Did required human checkpoints occur where the mechanism declares they should?
  • Residual gap / dependency: Oversight design adequacy and operator competency are not proven by logs alone

Obligation area: Accuracy/robustness/cybersecurity support

  • Mechanism under review: Approved test and monitoring evidence chain
  • Direct evidence (required): test execution receipts, validation artifacts, anomaly observations, control-event logs
  • Supporting context (optional): acceptance criteria, benchmark plans, security test methodologies
  • Verification question: Are test/monitoring observations attributable and linked to declared mechanisms?
  • Residual gap / dependency: Scientific validity, robustness thresholds, and security certification remain external

Obligation area: Post-market monitoring and incident readiness support

  • Mechanism under review: Observation-to-escalation evidence path
  • Direct evidence (required): timestamped observations, incident records linked to receipts, escalation timelines, closeout records
  • Supporting context (optional): incident taxonomy, reporting playbooks, regulator communication procedures
  • Verification question: Is there an inspectable chain from observation through response and closure?
  • Residual gap / dependency: Legal reporting thresholds, timeliness determinations, and regulator obligations are external

This is a bounded evidence-mapping template for selected obligations. It is not legal advice and not a conformity assessment outcome.

Evidence classes and mapping logic (direct evidence vs supporting context)

Direct evidence is generated by execution and evidence systems and is required for bounded mapping claims:

  • receipts, manifests, event logs, approvals, preserved artifact references, verification outputs

Supporting context helps interpretation but does not independently prove obligation fulfillment:

  • policy narratives, governance documents, legal memos, architecture diagrams, process descriptions

Use this logic:

  • if direct evidence is present and independently verifiable, the row can support a bounded claim about recorded activity
  • if only supporting context is present, mark the row as contextual support, not evidentiary proof
  • if direct evidence and context conflict, preserve both and mark the mismatch for review
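That three-way logic can be written as a small decision function. The label strings are illustrative placeholders:

```python
def classify_row(has_direct: bool, independently_verified: bool,
                 has_context: bool, conflict: bool) -> str:
    """Map a row's evidence state to the bounded claim it can support."""
    if conflict:
        # Preserve both records; never discard either side of a mismatch.
        return "mismatch-for-review"
    if has_direct and independently_verified:
        return "bounded-claim"
    if has_context:
        return "contextual-support"
    return "unsupported"

print(classify_row(True, True, True, False))   # direct + verified
print(classify_row(False, False, True, False))  # context only
print(classify_row(True, True, True, True))     # conflicting records
```

Conflict is checked first on purpose: a verified receipt that contradicts its policy narrative is a finding, not a bounded claim.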

Bounded claims (what mapping can support vs cannot assert/certify)

What this mapping can support

  • that specific governed actions occurred and were recorded
  • that artifact-to-claim linkage is inspectable
  • that independent verification confirmed continuity/attribution checks on mapped records
  • that explicit gaps and external dependencies were documented

What this mapping cannot assert or certify

  • legal interpretation of EU AI Act applicability, exemptions, or role outcomes
  • high-risk classification decisions or conformity-assessment conclusions
  • CE marking, notified-body outcomes, certification, attestation, or audit-pass status
  • organization-wide safety, compliance, or control effectiveness beyond captured evidence

Do not present this mapping as legal advice or as a certification instrument.

Observed vs inferred separation

Keep each row split between facts and interpretation:

  • Observed: facts directly present in receipts, logs, manifests, approvals, and preserved artifacts.
  • Inferred: conclusions about sufficiency, effectiveness, safety posture, legal adequacy, or maturity.

Only observed data is directly verifiable from system records. Inferred conclusions require accountable human judgment.
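One way to keep that split mechanical is to partition each row's statements by record type. The source names here are illustrative stand-ins:

```python
# Hypothetical record-type names for statements traceable to system records.
OBSERVED_SOURCES = {"receipt", "log", "manifest", "approval", "artifact"}

def split_row(statements):
    """Partition statements into observed facts (traceable to a system
    record) and inferred conclusions (require accountable human judgment)."""
    observed = [s for s in statements if s.get("source") in OBSERVED_SOURCES]
    inferred = [s for s in statements if s.get("source") not in OBSERVED_SOURCES]
    return observed, inferred

statements = [
    {"text": "approval A-12 recorded before run", "source": "approval"},
    {"text": "oversight design is adequate", "source": "judgment"},
]
observed, inferred = split_row(statements)
print(len(observed), len(inferred))
```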

Trust assumptions and limits

Record assumptions explicitly in each mapping packet, including:

  • identity and access-control integrity outside this mapping
  • host/platform integrity where evidence is generated and stored
  • key-management and signing-control integrity
  • completeness/correctness of upstream systems (asset inventories, ticketing, model registries, CMDB)
  • retention and archival behavior in external systems
  • legal interpretation, governance ownership, and role determination by the organization

If an assumption is unverified in the packet, treat it as a stated limit.

Next-page handoff to /docs/security-education

Continue to Security Education Scenarios for operator-focused attack-chain training that complements evidence mapping with practical response behavior.