Three hypothesis-tested strategic positions that define the ACAI worldview. Pattern-matched from deployments, regulatory examinations, and first-principles reasoning.
Making agents themselves compliant — governed, auditable, explainable, bounded. Every action traceable, authorised, and reproducible.
Agents performing compliance work autonomously — monitoring, alerting, reporting, and responding 24/7 across all regulatory domains.
Satisfying regulator expectations about AI systems themselves. Proving what your agents are and how they behave.
Like the DAMA wheel for data management and the ITIL service value system, ACAI structures agentic compliance as interconnected, peer-level domains radiating from a central governance core — with three rings representing operational execution, tactical management, and strategic governance.
Each domain specified across Operational (daily run), Tactical (monthly manage), and Strategic (quarterly/annual govern) rings. New agentic controls embed in what the organisation already runs for People, Process, and Technology — see the Must embed note inside every domain card below.
The ACAI architecture extends the governed semantic layer (consuming apps → semantic layer → data sources) with full agentic compliance infrastructure. Every layer is both a consumer of the layer below and a governed participant in the compliance ecosystem.
Agents are not software. They have goals, drift, emergent behaviour, and identity. ADLC extends traditional DevSecOps with agent-specific compliance gates at every phase — and formally closes the loop with decommissioning to prevent zombie agents.
Agents reduce cognitive load on high-volume, low-judgement tasks. Compliance officers focus on expertise, judgement, and regulator relationships.
Agents embedded in the IT/data layer act as continuous compliance monitors across infrastructure, databases, code pipelines, and cloud.
Agents at the output layer ensure everything leaving the compliance estate — reports, feeds, API responses — is governed and regulator-ready.
Complete traceability artefact — suitable for regulator examination, Board reporting, and audit evidence packs.
| Requirement | MAS FEAT + TRM | EU AI Act | NIST AI RMF | BNM RAI | ISO 42001 | ACAI |
|---|---|---|---|---|---|---|
| Fairness & Bias Testing | ✓ FEAT Fairness pillar — periodic bias audits, documented methodology | ✓ Art.10 — training data bias; Art.15 — accuracy across sub-groups | ✓ Measure — quantitative bias metrics with defined thresholds | ✓ Ethical AI — non-discriminatory outcomes in credit and AML | ✓ §6.1.2 — AI risk assessment includes bias and fairness criteria | D9 · D7 |
| Human Oversight | ✓ FEAT Accountability — senior manager accountable; escalation paths defined | ✓ Art.14 — mandatory for high-risk; intervention capability required | ✓ Govern — policies for human oversight; Manage — intervention | ✓ Board accountability — credit/AML decisions require human review capability | ✓ §8.4 — human oversight requirements for high-impact AI | D4 · D5 · D7 |
| Explainability | ✓ FEAT Transparency — decisions explainable to customers and regulators | ✓ Art.13 — transparency & information provision; Art.12 — logging | ✓ Map — contextual risk documentation; Measure — explainability metric | ⚠ Emerging — transparency expected for retail; guidance evolving | ✓ §9.1 — AI system performance monitoring including explainability | D2 · D3 · D7 |
| Immutable Audit Trail | ✓ FEAT Accountability + MAS TRM §9 — all AI decisions logged with detail | ✓ Art.12 — automatic logging throughout lifecycle; 6-month min retention | ✓ Manage — risk documentation; logging across full system lifecycle | ✓ RMIT — technology risk management requires comprehensive AI audit trails | ✓ §8.5 — logging and record-keeping requirements for AI systems | D2 |
| Model Risk Validation | ✓ FEAT Ethics + MAS MRM — validation before deployment; periodic re-validation | ✓ Art.9 — risk management system throughout lifecycle; conformity assessment | ✓ Measure — performance quantification; Map — risk ID; Manage — monitoring | ✓ BNM RMIT — independent validation for models in regulated decisions | ✓ §8.3 — AI system risk management including model validation | D4 · D7 |
| Agent Identity (NHI) | ✓ MAS TRM §6 — access controls; privileged access management; identity governance | ✓ Art.9 — access controls for AI systems; provider accountability | ✓ Govern — roles and responsibilities; Map — stakeholder identification | ✓ BNM RMIT §5 — technology access controls; IAM requirements for AI | ✓ §6.2 — roles and responsibilities; access controls for AI management | D1 |
| Incident Response (Art.62) | ✓ MAS TRM — serious AI incidents must be reported; post-incident review | ✓ Art.62 — serious incidents to national authority within 15 days | ✓ Manage — response and recovery; incident categorisation by risk level | ⚠ BNM TRM — technology incidents reported; AI-specific guidance emerging | ✓ §10.2 — nonconformity and corrective action; incident management | D6 · D7 |
| Third-party AI Vendor Risk | ✓ MAS Outsourcing Guidelines — vendor AI tools subject to same scrutiny; due diligence | ✓ Art.28 — obligations for deployers of third-party high-risk AI; liability chain | ✓ Govern — third-party risk in AI supply chain; Map — vendor context | ✓ BNM Outsourcing — AWS/Azure subject to outsourcing framework | ✓ §8.6 — supplier and third-party management for AI systems | D7 |
| Basel III Op Risk Linkage | ✓ MAS Notice 637 — operational risk capital; AI model failures are operational risk events | ⚠ Indirect — EU AI Act liabilities could trigger Basel Op Risk capital add-ons | ⚠ Partial — NIST manages risk but doesn't address capital adequacy | ✓ BNM Capital Adequacy — operational risk framework covers AI failures | ⚠ Not directly addressed — ISO 42001 is a management system, not capital framework | D7 |
Like CMMI for software capability, the ACAI Maturity Model gives institutions a structured progression path. Most ASEAN Organizations are at Level 1 today. 12-month target is Level 3. Level 5 is the 3-year strategic horizon.
Hypothesis-tested. Nowhere else on the internet. The ideas that separate institutions building moats from those scrambling to catch up.
Every agent carries a cryptographically-signed W3C Verifiable Credential embedding identity, permissions, validation history, and kill-switch status. Systems refuse unauthenticated agents. Enables cross-bank agent trust in COSMIC-style networks without sharing raw data.
Borrow from aviation: every agent has a black-box CFR capturing the last 30 minutes of full Chain-of-Thought reasoning. When a compliance event occurs, investigators reconstruct the exact decision path. Write-once immutable store with regulatory key escrow held by neutral third party.
Replace the slow AI Ethics Committee with a real-time Agent Senate — a governance body where Legal, Risk, Technology, Business, and Compliance each hold quorum veto power. Meets daily for 15 minutes using an AI-summarised brief prepared by the Orchestrator Agent itself.
Every regulatory metric defined once, governed centrally, served to any agent, dashboard, or regulator via a single API. Agents never compute metrics independently — they query the semantic layer. Eliminates conflicting regulatory numbers across teams. Gives regulators a single consistent data interface.
Standing red-team "attacker agents" continuously probing production agents for prompt injection, jailbreak attempts, reward hacking, emergent behaviour, and policy bypass. AART finds vulnerabilities and auto-generates patch proposals routed to HITL. Continuous pen-testing for agentic systems — the first bank to operationalise AART sets the MAS model risk benchmark.
Quarterly Agent Performance Review identical in rigour to human performance management. Evaluate: decision accuracy, FPR trend, escalation frequency, policy violation rate, fairness metrics. Underperforming agents are retrained or decommissioned. APR feeds directly into the AI Risk Register and model re-validation pipeline. The first institution to do this formally defines the ASEAN standard.
Deploy compliance enforcement as a sidecar to every agent — a compliance mesh. Each sidecar holds the latest policy bundle (synced via GitOps), evaluates actions locally, and logs independently. If the central control plane goes down, agents continue within their last-known valid policy state. Zero single point of compliance failure. Architecturally equivalent to Istio service mesh for microservices.
A compliance-aware agent in ServiceNow. When any incident is raised: (1) classifies regulatory impact, (2) assigns compliance risk rating P1–P4, (3) notifies CCO if P1 with EU Art.62 SLA triggered, (4) generates draft regulatory notification, (5) updates AI Risk Register. Closes the ITSM–Compliance loop that is currently 100% manual in every ASEAN bank.
Build a supervised, read-only agent interface for MAS/BNM supervisors. Regulators query compliance data in natural language: "Show me all AML cases involving crypto off-ramps in Q1 2026 with human-escalated outcomes and SAR filing times." The agent surfaces structured evidence with full citations to source data. Reduces examination burden by 60%+. Positions the institution as the most transparent bank in ASEAN — a trust moat that is essentially impossible to replicate without building everything in this playbook first.
Before building anything, find everything that already exists. Scan your full technology estate for autonomous AI: embedded vendor models (Salesforce Einstein, Oracle AI, SAP AI Core), internally-built scripts with LLM API calls, Microsoft Copilot integrations accessing production data. Build your Agent Registry from zero — this is your compliance ground truth. Most ASEAN Organizations discover 40–80 unregistered shadow agents in this 2-week sprint alone. Document owner, scope, risk tier, and data access for each.
Establish the Agent Senate (5-person cross-functional governance body). Define the NHI identity schema and extend your IAM platform (CyberArk, SailPoint) to manage agent identities. Define the three-tier access model (Read / Write-Restricted / Execute-HITL). Configure JIT token provisioning for Tier 2/3. Publish the Agent Lifecycle Policy: who can create an agent, who must approve, what validation is required. This is the governance operating model — get it right before the first production agent deploys.
Convert your top 20 most-referenced compliance rules into machine-enforceable OPA/Rego policies. Simultaneously, build the Compliance Semantic Layer — define your 30 core compliance metrics as versioned, governed definitions accessible via a single API. Ingest your authoritative regulatory corpus (MAS, BNM, FATF, BCBS) into a RAG vector store. This is the intelligence substrate every agent will query at runtime. Get this right and every subsequent agent is 3× faster to build.
First production agent should be AML alert triage: highest volume, clearest ground truth, most measurable outcome, safest failure mode. Build with LangGraph. Wire to your TMS via MCP tool connector. Implement CoT logging via LangSmith or Arize Phoenix. Deploy in shadow mode for minimum 6 weeks. Target: ≥85% alignment with human decisions. Human compliance officers review outputs daily and provide feedback labels — this becomes your continuous training signal.
Run structured red-team exercises: prompt injection via customer name fields and transaction narratives, adversarial transaction patterns, conflicting-rule scenarios, PEP edge cases. Engage Model Risk Management for independent validation aligned to MAS TRM and SR 11-7. Conduct initial fairness assessment across demographic groups. Brief the MAS Innovation Hub (Singapore) or file BNM i-RaPS (Malaysia) notification. Proactive regulator engagement at this stage is a compliance asset — not a liability.
Go-live at 20% traffic, scaling to 100% over 4 weeks. Maintain parallel human capacity for 90 days. Activate: Compliance Flight Recorder, circuit-breaker thresholds (auto-pause if FPR exceeds 15% above baseline), real-time compliance posture dashboard for CCO, and APR calendar (first APR at Day 90). Register the agent in your AI Risk Register. Begin building agent #2 (KYC refresh) using the same ADLC blueprint — the second agent takes half the time. Communicate to your Board: you are now a governed agentic compliance institution.