EU AI Act & ISO/IEC 42001 Readiness

Operationalize readiness for the EU AI Act and ISO/IEC 42001 through an AI Management System with policy-as-code enforcement, distributed ownership, and continuous evidence in production — not periodic compliance exercises.

Public reference: Iberdrola certifies its AI Management System with AENOR

Executive Outcome

01. Versioned, testable policies that teams validate early in delivery — reducing late-stage surprises and waiver sprawl.
02. Consistent runtime enforcement through declared policy enforcement points, preventing unapproved traffic from reaching models.
03. Audit-ready execution where evidence is produced automatically as a byproduct of enforcement and tracing, not reconstructed before reviews.

Engagement focus

AIMS implementation and runtime governance for EU AI Act and ISO/IEC 42001 in a federated delivery landscape — policy-as-code, distributed ownership, and continuous evidence.

What this covers
  • AI Management System (AIMS) with scope, ownership, and control taxonomy (sketched after this list)
  • Policy-as-code enforcement with versioned definitions and exception handling
  • Continuous evidence generation for audit sampling and incident analysis
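
To make the control taxonomy and its link to evidence concrete, here is a minimal sketch of what a Statement-of-Applicability-style register entry might look like in Python. All identifiers, clause references, owners, and evidence source names are illustrative assumptions, not artifacts from the engagement.

```python
from dataclasses import dataclass, field
from enum import Enum


class Applicability(Enum):
    APPLICABLE = "applicable"
    NOT_APPLICABLE = "not_applicable"
    PLANNED = "planned"


@dataclass
class ControlEntry:
    """One row of a Statement-of-Applicability-style control register."""
    control_id: str                  # internal control identifier
    description: str                 # what the control asserts
    obligations: list[str]           # external clauses the control maps to (illustrative)
    applicability: Applicability     # whether the control is in scope
    owner: str                       # accountable team or role
    evidence_sources: list[str] = field(default_factory=list)  # where proof is produced


# Hypothetical register entry; clause references and source names are placeholders.
register = [
    ControlEntry(
        control_id="CTL-LOG-01",
        description="Production model traffic is logged with policy version and decision.",
        obligations=["EU AI Act Art. 12 (record-keeping)", "ISO/IEC 42001 Annex A (life cycle logging)"],
        applicability=Applicability.APPLICABLE,
        owner="platform-governance",
        evidence_sources=["enforcement-decision-log", "request-envelope-store"],
    ),
]

# A sampling view can filter the register by applicability and list evidence sources per control.
for entry in register:
    if entry.applicability is Applicability.APPLICABLE:
        print(entry.control_id, "->", ", ".join(entry.evidence_sources))
```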

Context

In a regulated, federated organization, AI delivery velocity was outpacing the capacity of manual reviews and checklist-based governance. The objective was to shift governance into the runtime so control enforcement and evidence generation become systematic, consistent, and repeatable across teams and providers. ISO/IEC 42001 frames this as an AI Management System (AIMS) with defined scope, ownership, and continuous improvement. The practical challenge was making this work across multiple delivery teams, cloud providers, and model vendors — each with different tooling, release cadences, and risk profiles — without creating a central bottleneck.

The Challenge

  01. Manual reviews did not scale to the pace of AI delivery, creating bottlenecks and inconsistent outcomes across teams.
  02. Policy enforcement varied across teams and providers — uneven controls, inconsistent exception handling, and control drift over time.
  03. Audit trails were reconstructed from disjointed logs, creating evidence gaps and high operational overhead before every review.
  04. Decentralized model access increased shadow AI risk and reduced visibility into usage, data exposure, and tool permissions.
  05. No shared language existed for control applicability, exception handling, or evidence requirements across the federated delivery landscape.

Approach

  • AIMS scope, organizational context, and leadership commitments translated into an actionable control taxonomy and operating model.
  • Governance-as-code: versioned policy definitions, control taxonomy, and clear ownership for policy lifecycle management.
  • Declared policy enforcement points for model and tool traffic with standardized exception handling and escalation paths.
  • Policy-stamped request envelope binding request context, applicable policy version, enforcement decisions, and key signals into a traceable record (see the sketch after this list).
  • Control evidence model linking each control to enforcement signals, decision records, and sampling-ready artifacts.
  • Evidence retention patterns: enforcement decisions and runtime signals captured in immutable, joinable form for audit sampling and incident analysis.
  • Reporting and sampling views mapping obligations to controls, evidence sources, and exception handling for EU AI Act and ISO/IEC 42001.
  • Distributed ownership model: central team owns policy definitions and enforcement infrastructure; delivery teams own local implementation and exception documentation.
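
A minimal sketch of the declared policy enforcement point and policy-stamped request envelope described above, assuming a Python enforcement service. The policy shape, field names, and decision effects are illustrative assumptions rather than the engagement's actual specification.

```python
import hashlib
import json
import uuid
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class PolicyDecision:
    policy_id: str        # which policy definition was evaluated
    policy_version: str   # exact version applied, for later audit joins
    effect: str           # "allow", "block", or "flag"
    reason: str           # human-readable rationale for the decision


def evaluate_policies(request: dict, policies: list[dict]) -> list[PolicyDecision]:
    """Evaluate versioned policy definitions against one model/tool request."""
    decisions = []
    for policy in policies:
        # Illustrative rule shape: a policy blocks requests whose model is not on its allow-list.
        allowed = request["model"] in policy["allowed_models"]
        decisions.append(PolicyDecision(
            policy_id=policy["id"],
            policy_version=policy["version"],
            effect="allow" if allowed else "block",
            reason="model on allow-list" if allowed else f"model {request['model']} not approved",
        ))
    return decisions


def enforce(request: dict, policies: list[dict]) -> dict:
    """Policy enforcement point: evaluate, then stamp the request with context, decisions, and signals."""
    decisions = evaluate_policies(request, policies)
    return {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "caller": request["caller"],                        # request context
        "model": request["model"],
        "policy_versions": [d.policy_version for d in decisions],
        "decisions": [asdict(d) for d in decisions],
        # Key signal kept for traceability without retaining raw prompt content.
        "payload_digest": hashlib.sha256(
            json.dumps(request["payload"], sort_keys=True).encode()
        ).hexdigest(),
        "outcome": "block" if any(d.effect == "block" for d in decisions) else "allow",
    }


# Hypothetical policy and request, to show the envelope shape.
policies = [{"id": "mdl-allowlist", "version": "1.4.0", "allowed_models": ["approved-model-a"]}]
request = {"caller": "team-claims", "model": "approved-model-a", "payload": {"prompt": "example"}}
print(json.dumps(enforce(request, policies), indent=2))
```

Blocking at this point is what keeps unapproved traffic from reaching models; the envelope is the record that later serves as evidence.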

Key Considerations

  • Policy discipline requires upfront alignment with application teams and clear change management to avoid friction and bypass patterns.
  • Shared enforcement layer becomes a critical service — it needs explicit reliability, latency, and availability targets.
  • Policy authoring and maintenance require dedicated capability, review practices, and controlled rollout to prevent drift.
  • Distributed ownership means central team cannot enforce everything directly — influence, defaults, and exception governance matter more than mandates.

Alternatives Considered

  • Manual approval gates: rejected — non-scalable and prone to inconsistent outcomes under volume.
  • Library-based controls: rejected — can be bypassed, drift across implementations, and fail to provide central evidence.
  • Central mandate without adoption support: rejected — generates resistance and shadow workarounds in a federated organization.

Representative Artifacts

01. AIMS Scope and Context Statement (boundaries, roles, decision rights)
02. Control Taxonomy Mapping (EU AI Act, ISO/IEC 42001)
03. Statement of Applicability (control register with applicability, status, evidence sources)
04. Policy Repository Structure and Taxonomy
05. Policy-Stamped Request Envelope Specification
06. Evidence Retention and Audit Record Model (sketched after this list)
07. Exception Management and Waiver Workflow
08. Compliance Reporting and Sampling Dashboard
09. Distributed Ownership Model (central vs. delivery team responsibilities)
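
A minimal sketch of the evidence retention and audit record model (artifact 06), assuming an append-only store that keeps enforcement outcomes joinable to both the control register and the request envelope. The file-based log and field names are simplifying assumptions; a production store would typically be immutable object or WORM storage.

```python
import json
import random
from pathlib import Path

# Append-only evidence log: one JSON line per control per request, never rewritten in place.
# A local file keeps the sketch runnable; a production store would be immutable by construction.
EVIDENCE_LOG = Path("evidence_log.jsonl")


def record_evidence(envelope: dict, control_ids: list[str]) -> None:
    """Persist an enforcement envelope as immutable, joinable evidence rows."""
    with EVIDENCE_LOG.open("a", encoding="utf-8") as log:
        for control_id in control_ids:
            row = {
                "control_id": control_id,               # joins to the control register
                "request_id": envelope["request_id"],   # joins back to the request envelope
                "outcome": envelope["outcome"],
                "policy_versions": envelope["policy_versions"],
                "timestamp": envelope["timestamp"],
            }
            log.write(json.dumps(row) + "\n")


def sample_evidence(control_id: str, k: int = 5) -> list[dict]:
    """Pull a random sample of evidence rows for one control, as audit sampling would."""
    rows = [json.loads(line) for line in EVIDENCE_LOG.read_text(encoding="utf-8").splitlines()]
    matching = [row for row in rows if row["control_id"] == control_id]
    return random.sample(matching, min(k, len(matching)))
```
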
Acceptance Criteria

  • Policy enforcement applied consistently to production model and tool traffic through declared enforcement points.
  • Policy changes versioned, reviewable, and promotable through defined release discipline (see the sketch after this list).
  • Blocked or flagged requests generate complete enforcement records suitable for audit sampling.
  • Developers receive actionable feedback on policy violations in the delivery workflow.
  • AIMS scope, ownership, and control applicability documented, versioned, and linked to evidence sources.
  • Exception paths governed with documented rationale, ownership, and review cadence.
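
As one way of making policies versioned, reviewable, and testable in the delivery workflow, a policy change can carry regression tests that run in CI before promotion. The sketch below is a hypothetical pytest suite; it assumes the enforcement sketch earlier is packaged as a module named `enforcement`, and the policy version and model names are placeholders.

```python
# test_policies.py: run with pytest in CI so policy violations surface before release.
# Assumes the enforcement sketch above is importable as a module named `enforcement`.
from enforcement import enforce

POLICIES_V2 = [{"id": "mdl-allowlist", "version": "2.0.0", "allowed_models": ["approved-model-a"]}]


def make_request(model: str) -> dict:
    return {"caller": "ci-policy-tests", "model": model, "payload": {"prompt": "test"}}


def test_approved_model_is_allowed():
    envelope = enforce(make_request("approved-model-a"), POLICIES_V2)
    assert envelope["outcome"] == "allow"


def test_unapproved_model_is_blocked_with_reason():
    envelope = enforce(make_request("shadow-model-x"), POLICIES_V2)
    assert envelope["outcome"] == "block"
    # Actionable feedback: the blocking decision carries a reason developers can act on.
    reasons = [d["reason"] for d in envelope["decisions"] if d["effect"] == "block"]
    assert any("not approved" in reason for reason in reasons)
```

Failing tests give developers the actionable feedback named in the criteria above before a policy version is promoted to the runtime enforcement points.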