
Agentic AI Risks for Saudi Banks: Five Eyes Guidance Decoded

On April 30, 2026, six Five Eyes cyber agencies released joint guidance on agentic AI security risks. Saudi banks scaling autonomous AI must align with SAMA CSCC and NCA ECC before granting agents broader authority.

FyntraLink Team

On April 30, 2026, the Five Eyes cyber alliance — ASD's ACSC, CISA, NSA, the UK's NCSC, Canada's Cyber Centre, and NCSC-NZ — released joint guidance titled "Careful Adoption of Agentic AI Services." For Saudi banks racing to deploy autonomous AI agents in fraud triage, KYC remediation, customer service and SOC operations, the document is not a footnote. It is a clear signal that the regulatory and operational guardrails for agentic AI are now codified — and SAMA-supervised institutions must respond before scaling further.

Why Agentic AI Is Different From GenAI Your Bank Already Runs

Agentic AI systems do not merely generate text or summarize documents. They take autonomous actions across interconnected tools, APIs, databases and downstream services. A fraud-investigation agent can query a core banking system, freeze a card, open a SAR draft and message a relationship manager — all without human approval at each step. The Five Eyes guidance specifically warns that this delegated authority creates four risk categories absent from traditional GenAI: privilege escalation through chained tool use, emergent behaviors during multi-step planning, structural dependencies on third-party model providers, and accountability gaps when an agent acts outside its intended scope.

For a Saudi bank, the practical translation is uncomfortable. An agent with read-write access to a payment hub, an internal LLM with tool-use enabled and a vector store of customer documents is a single prompt-injection away from data exfiltration or unauthorized transactions — none of which traditional DLP or transaction monitoring controls were designed to detect.

The Four Mitigation Pillars Every CISO Should Operationalize

The joint guidance crystallizes mitigation into four areas: align agentic AI risk to your existing security model, never grant broad or unrestricted access, deploy incrementally beginning with low-risk tasks, and continuously assess against evolving threat models. In operational terms, this means treating each AI agent as a non-human identity with its own privileged access management lifecycle — provisioning, JIT entitlements, session recording, and scheduled access reviews. It also means architecting agents behind a policy enforcement gateway that validates every tool call against an allow-list before execution, rather than trusting the model's reasoning chain.
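The allow-list gateway pattern can be sketched in a few lines. This is a minimal illustration, not a production gateway: the `ToolCall` shape, tool names, and argument checks are all hypothetical assumptions chosen for the fraud-triage example earlier in the article.

```python
# Sketch of a policy enforcement gateway: every tool call the agent proposes
# is validated against an explicit allow-list before execution, instead of
# trusting the model's reasoning chain. Default-deny: anything not listed
# is refused. Tool names and argument policies are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)


# Allow-list: tool name -> predicate over its arguments.
ALLOW_LIST = {
    "lookup_customer": lambda a: set(a) == {"customer_id"},
    "draft_sar": lambda a: "case_id" in a,
    # Deliberately absent: "freeze_card", "initiate_payment" — actions like
    # these belong behind a human-in-the-loop path, not direct agent access.
}


def enforce(call: ToolCall) -> bool:
    """Permit the call only if the tool is allow-listed AND its args pass policy."""
    check = ALLOW_LIST.get(call.tool)
    return check is not None and check(call.args)
```

Because the gateway sits between the agent and downstream tools, a prompt-injected instruction to call an unlisted tool simply fails policy validation rather than reaching a payment hub or core banking API.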

The guidance is explicit that traditional sandboxing is insufficient. Agents must be observable end-to-end, with every plan step, tool invocation and intermediate output logged in a tamper-evident store. This is closer to financial transaction logging than to application telemetry — and that framing is intentional.
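One common way to make such a store tamper-evident is hash chaining, where each record commits to the hash of the record before it. The sketch below is an assumption-laden illustration of that idea (field names and record shape are invented for this example), not a substitute for a hardened audit system.

```python
# Sketch of tamper-evident logging for agent activity: each record carries a
# SHA-256 hash chained to the previous record's hash, so editing or deleting
# any earlier entry breaks verification of everything after it.
import hashlib
import json


def append_record(log: list, record: dict) -> None:
    """Append a plan step / tool invocation, chained to the prior record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append(dict(record, prev=prev_hash, hash=digest))


def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampering makes this return False."""
    prev_hash = "0" * 64
    for rec in log:
        body = json.dumps(
            {k: v for k, v in rec.items() if k not in ("prev", "hash")},
            sort_keys=True,
        )
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True
```

In practice the chain head would be anchored externally (e.g., periodically written to a WORM store), so an attacker cannot silently rebuild the whole chain.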

Impact on Saudi Financial Institutions Under SAMA and NCA Oversight

SAMA's Cyber Security Framework and the CSCC subdomains on Identity & Access Management (3.3), Application Security (3.4) and Third-Party Cybersecurity (3.7) all apply to agentic AI deployments, but the framework was not written with autonomous agents in mind. The most direct mapping is to NCA ECC subcontrols 2-6 (Cybersecurity in Information System Acquisition, Development and Maintenance) and 2-10 (Third-Party and Cloud Computing Cybersecurity), both of which require risk assessments before production deployment of new technologies. A bank that has rolled out a customer-facing AI agent without documenting the agent's tool inventory, data classification, and human-in-the-loop checkpoints is already non-compliant under a strict reading of ECC.

Add PDPL Article 18 (data subject rights to object to automated decisions) and the regulatory exposure compounds. An agentic loan-screening assistant that auto-rejects an applicant without a documented human review path is a regulatory finding waiting to happen. Saudi banks should also expect SAMA to issue a supervisory expectation on AI governance within the next 12 months, mirroring the Bank of England's SS1/23 trajectory.

Recommendations and Practical Steps

  1. Publish an internal Agentic AI Acceptable Use Standard within 60 days, mapped to SAMA CSCC 3.4 and NCA ECC 2-6, defining approved use cases, prohibited tool categories (e.g., direct database write access, payment initiation) and the human-in-the-loop matrix.
  2. Inventory every agent currently in pilot or production. For each, document the model provider, system prompt version, tool list, data sources accessed, and the business owner accountable under the Three Lines of Defense model.
  3. Implement a policy enforcement gateway between agents and downstream tools. Open-source options like Pomerium or commercial offerings from Cisco AI Defense and Palo Alto AI Runtime Security can broker every tool call.
  4. Treat agent identities as privileged service accounts. Rotate credentials, scope IAM roles to the minimum required, and route all activity through your PAM solution (CyberArk, BeyondTrust, Delinea) for session recording.
  5. Red-team your agents against the OWASP Top 10 for LLM Applications and the new MITRE ATLAS techniques covering prompt injection, tool poisoning and excessive agency. Document findings and remediation in your annual NCA ECC compliance file.
  6. Establish a human review threshold for any agent action exceeding a defined risk score — typically any write operation against a customer record, a financial transaction, or a regulatory filing.
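Recommendation 6 can be made concrete as a simple scoring gate. The threshold, scores, and action names below are illustrative assumptions for this article — they are not a SAMA-defined scale and would need calibration to each bank's risk appetite.

```python
# Sketch of a human-review threshold (recommendation 6): write operations
# against customer records, financial transactions, and regulatory filings
# score above the review line; unknown actions fail closed to the maximum.
REVIEW_THRESHOLD = 50  # illustrative; calibrate to your risk appetite

RISK_SCORES = {
    "read_customer_record": 10,
    "write_customer_record": 60,    # any write against a customer record
    "initiate_payment": 90,         # financial transaction
    "submit_regulatory_filing": 95, # regulatory filing
}


def requires_human_review(action: str) -> bool:
    """Unknown or unscored actions default to 100 — i.e., always escalate."""
    return RISK_SCORES.get(action, 100) >= REVIEW_THRESHOLD
```

The fail-closed default matters: an agent discovering a tool nobody scored should be the trigger for a review, never a silent pass-through.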

Conclusion

The Five Eyes guidance is not the last word on agentic AI security, but it is the first authoritative cross-jurisdictional baseline. Saudi banks that integrate it into their existing SAMA and NCA control libraries — rather than treating it as a separate AI governance silo — will move faster, audit cleaner, and avoid the painful retrofits that always follow regulator-led enforcement.

Is your organization prepared? Contact Fyntralink for a complimentary SAMA Cyber Maturity Assessment that includes an agentic AI readiness review mapped to CSCC and NCA ECC controls.