
Xabier Muruaga
AI Architect · AI Security & Agentic Systems
Case Studies
Representative examples focused on artifacts and acceptance criteria.
About Me
Roles, credentials, and the operating context in which this work is applied.
What I design
Production AI architecture that operates reliably and securely at scale.
I design enterprise AI systems where security, governance, and assurance are architectural decisions: built into reference architectures, control planes, and release gates from day one, not layered on after deployment.
In large organizations, AI stalls for predictable reasons. Prototypes move quickly but break at scale when security boundaries are unclear, controls fragment across teams, and audit evidence has to be recreated on every deployment. I make scale deliberate by defining the architecture before the first line of code, with threat models, enforcement points, and acceptance criteria that teams can actually follow.
A major part of the work is security: threat modeling against OWASP LLM and agentic AI guidance, MITRE ATLAS, CSA MAESTRO, and NIST AI RMF; enforcing retrieval boundaries, tool permissions, and identity-scoped execution; building evaluation gates and assurance packages that satisfy audit and regulatory requirements including the EU AI Act.
When systems are already live, I assess them through adversarial testing and assurance reviews, delivering severity-rated findings, remediation criteria, and audit-ready evidence.
The case studies reflect this lifecycle: production AI architecture, AI security architecture, agentic AI controls, independent assurance assessment, compliance readiness, and value discovery.