
Shadow AI: The Invisible Threat Inside Saudi Banks That No SAMA Audit Will Catch

Three in four CISOs globally have already found unsanctioned AI tools running inside their organizations. In Saudi financial institutions, that means confidential customer data, earnings projections, and audit records may be flowing into AI platforms with no DPA, no access control, and no SAMA oversight.

FyntraLink Team

Three out of four CISOs have already discovered unsanctioned generative AI tools running inside their organizations — and in Saudi financial institutions, the stakes are uniquely high. When a credit analyst pastes customer repayment data into an external AI assistant to draft a report, or a compliance officer uploads draft audit findings to an AI summarizer not approved by IT, sensitive data crosses into infrastructure that no security team controls, no SAMA audit evaluates, and no PDPL data processing agreement covers.

What Is Shadow AI — And Why Is It Different from Shadow IT?

Shadow IT has existed for decades: employees spinning up unauthorized cloud storage, communication tools, or SaaS applications. Shadow AI is a fundamentally different threat vector. Unlike a file-sharing app that merely stores data, generative AI platforms actively process, analyze, and in many cases retain or train on the data submitted to them. An employee using an unapproved ChatGPT instance or a third-party AI summarizer is not just storing data externally — they are feeding it into inference engines whose data handling policies are opaque, whose jurisdiction may lie outside Saudi Arabia, and whose logs are inaccessible to the organization's SOC.

According to the 2026 CISO AI Risk Report, 86% of employees now use AI tools at least weekly for work tasks, with a significant portion using unapproved versions of those tools. Shadow AI has overtaken misconfigured cloud storage as the most common entry point for unintentional data leakage across the financial sector.

The Data That Is Already Leaving Saudi Banks

BlackFog's April 2026 research quantifies what CISOs fear most: 33% of employees have shared internal research or proprietary datasets with unauthorized AI tools; 27% have shared employee data including payroll and performance records; and 23% have directly shared financial statements or sales data. In the context of a Saudi bank or insurance company, these figures translate to customer KYC records, transaction histories, risk appetite frameworks, and regulatory filing drafts potentially residing on servers that the institution has never audited and cannot reach.

A particularly dangerous pattern is emerging in financial institutions running Microsoft 365 Copilot pilots. Employees granted access to Copilot without a structured data governance program often use it to query data lakes and SharePoint sites that contain sensitive customer records — crossing internal data classification boundaries in seconds, without triggering any DLP alert. When Copilot is used within approved guardrails it is a powerful productivity tool; when deployed without proper scoping, it becomes the fastest internal data breach vector a CISO has ever seen.

The SAMA CSCC and PDPL Compliance Gap No One Is Auditing

SAMA's Cybersecurity Framework (CSCC) is explicit on data governance under Domain 3: member organizations must maintain an inventory of all information assets, enforce data classification, and ensure access to sensitive data is controlled and logged. Shadow AI tools sit entirely outside this inventory. There is no asset record, no classification tag, no access log, and no incident response playbook for a breach originating from an external AI platform that processed data an employee pasted manually.

Saudi Arabia's Personal Data Protection Law (PDPL), as amended in 2024, adds a harder compliance edge. Article 14 prohibits the transfer of personal data outside the Kingdom without explicit controller authorization and a verified adequacy finding or contractual safeguard. When an employee submits customer personal data — even a sample — to a cloud-hosted AI tool headquartered outside Saudi Arabia, without a Data Processing Agreement that satisfies PDPL requirements, the institution is in violation. That violation does not appear on any SAMA inspection checklist, because no one has mapped the tool to the data flow. The NCA's Essential Cybersecurity Controls (ECC-1:2018) compound the exposure: Control 3.3.1 requires organizations to document and approve all software and services processing organizational data, with a formal risk acceptance process for exceptions. Shadow AI tools, by definition, have bypassed this control entirely.

How to Identify Shadow AI in Your Environment

Detection requires a layered approach combining technology and behavioral analysis. Start with DNS and proxy log analysis: look for high-frequency queries to known AI platform domains — api.openai.com, claude.ai, gemini.google.com, perplexity.ai, poe.com — originating from user workstations rather than sanctioned integration servers. Any traffic to these domains from devices that have not been through IT's AI onboarding process is a shadow AI signal worth investigating.
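
As a starting point, here is a minimal Python sketch of that proxy-log analysis. It assumes a CSV export with src_ip and dest_host columns; the domain list and the sanctioned-server addresses are illustrative placeholders to replace with your own proxy schema and approved integration inventory.

```python
import csv
from collections import Counter

# Known generative AI platform domains (illustrative, not exhaustive).
AI_DOMAINS = {
    "api.openai.com", "claude.ai", "gemini.google.com",
    "perplexity.ai", "poe.com",
}

# Sanctioned integration servers whose AI traffic is approved (assumed addresses).
SANCTIONED_SOURCES = {"10.20.0.15", "10.20.0.16"}

def is_ai_domain(host: str) -> bool:
    """Match the domain itself or any subdomain of it."""
    host = host.lower()
    return any(host == d or host.endswith("." + d) for d in AI_DOMAINS)

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count AI-platform requests per (source, domain) pair, excluding
    sanctioned servers. Assumes a CSV proxy export with 'src_ip' and
    'dest_host' columns; adapt the field names to your proxy's schema."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if is_ai_domain(row["dest_host"]) and row["src_ip"] not in SANCTIONED_SOURCES:
                hits[(row["src_ip"], row["dest_host"])] += 1
    return hits

if __name__ == "__main__":
    # Triage the noisiest workstations first.
    for (src, host), count in find_shadow_ai("proxy_export.csv").most_common(20):
        print(f"{src} -> {host}: {count} requests")
```

Even this crude frequency count is enough to separate a one-off curiosity visit from a workstation making hundreds of daily API calls, which is where investigation effort should go first.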

Cloud Access Security Brokers (CASBs) such as Microsoft Defender for Cloud Apps and Netskope provide AI-specific discovery features that categorize SaaS applications by AI capability and data handling risk. SaaS Security Posture Management (SSPM) tools from vendors like Obsidian Security and Grip Security can identify OAuth grants made by employees to third-party AI applications — often the hidden integration layer where data silently flows out. Finally, DLP policies should be updated to flag large text blocks, financial data patterns (IBAN formats, commercial registration (CR) numbers, account IDs), and document uploads directed at AI API endpoints.
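
Before codifying such a rule in a commercial DLP platform, the pattern layer can be prototyped in a few lines. The sketch below is a simplified illustration, assuming outbound request bodies are available for inspection: the Saudi IBAN structure (SA, two check digits, then 20 further characters) is standard, while the account-ID pattern, endpoint list, and paste-size threshold are assumed values to tune.

```python
import re

# Saudi IBANs: 'SA' + 2 check digits + 2-digit bank code + 18-character
# account portion (24 characters total; the account part can be alphanumeric).
SAUDI_IBAN = re.compile(r"\bSA\d{4}[A-Z0-9]{18}\b")

# Hypothetical pattern for internal 10-digit account identifiers; tune to
# your own numbering schemes.
ACCOUNT_ID = re.compile(r"\b\d{10}\b")

# AI API endpoints that should trigger inspection (illustrative list).
AI_ENDPOINTS = ("api.openai.com", "api.anthropic.com",
                "generativelanguage.googleapis.com")

LARGE_PASTE_THRESHOLD = 2_000  # characters; tune to limit false positives

def dlp_flags(dest_host: str, body: str) -> list[str]:
    """Return the reasons an outbound request to an AI endpoint should be
    flagged, or an empty list if the destination is not an AI API."""
    if not any(dest_host == ep or dest_host.endswith("." + ep) for ep in AI_ENDPOINTS):
        return []
    reasons = []
    if SAUDI_IBAN.search(body):
        reasons.append("saudi_iban_detected")
    if len(ACCOUNT_ID.findall(body)) >= 3:  # several account IDs in one paste
        reasons.append("bulk_account_ids")
    if len(body) > LARGE_PASTE_THRESHOLD:
        reasons.append("large_text_block")
    return reasons

# Example: a paste containing an IBAN headed for an AI API should be flagged.
print(dlp_flags("api.openai.com", "Customer IBAN: SA4420000001234567891234"))
# -> ['saudi_iban_detected']
```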

Practical Recommendations for Saudi Financial Institutions

  1. Build an AI Application Registry: Mandate that all AI tools — including Copilot, Claude for Work, and any embedded AI features in existing SaaS — go through a security and PDPL compliance review before employee use. Map each approved tool to the data classification levels it is permitted to process (a minimal registry sketch follows this list).
  2. Deploy a CASB with AI Visibility: Configure your CASB to block unapproved generative AI platforms at the network layer for all endpoints handling Tier 1 or Tier 2 classified data under SAMA CSCC's data classification model.
  3. Establish AI-Specific DPA Templates: Work with legal and compliance to create standard Data Processing Agreement templates that satisfy PDPL Article 14 requirements for any AI vendor processing Saudi customer data from outside the Kingdom.
  4. Govern Non-Human Identities (NHIs): AI tools authorized through OAuth tokens and API keys create non-human identities that accumulate permissions over time. Apply the same identity governance rigor to AI service accounts that you apply to privileged human users — quarterly review, least-privilege enforcement, and automated revocation on detected anomaly.
  5. Run a Shadow AI Red Team Exercise: Simulate shadow AI exfiltration scenarios in your next tabletop exercise: an employee pastes customer IBAN data into an external AI tool; the tool retains it for training. Map the detection gap from first data submission to SOC alert. Most institutions discover the gap is measured in months, not hours.
  6. Train Employees on AI Data Hygiene: Security awareness programs in Saudi banks still largely focus on phishing. Add a dedicated module on AI data handling: what data must never enter any external AI tool, how to use approved AI within classification boundaries, and how to report suspected shadow AI use by colleagues.
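
For teams building the registry in recommendation 1 from scratch, the sketch below shows one possible shape for the core record and the permission check. The tier model, field names, and jurisdiction logic are illustrative assumptions, not a SAMA-mandated schema; the point is that every approved tool carries an explicit, machine-checkable scope.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum

class DataTier(IntEnum):
    """Illustrative tiers; map these to your institution's classification
    model, where a lower number means more sensitive."""
    CONFIDENTIAL = 1   # customer KYC, transaction histories, audit drafts
    INTERNAL = 2
    PUBLIC = 3

@dataclass
class AIToolRecord:
    name: str                      # e.g. "Microsoft 365 Copilot"
    vendor: str
    hosting_jurisdiction: str      # drives the PDPL cross-border question
    dpa_in_place: bool             # a compliant Data Processing Agreement exists
    max_permitted_tier: DataTier   # most sensitive tier the tool may process
    last_review: date
    approved: bool = False
    notes: list[str] = field(default_factory=list)

def may_process(tool: AIToolRecord, tier: DataTier) -> bool:
    """A tool may handle data at `tier` only if it is approved, the tier is
    within the tool's permitted scope, and cross-border use is covered by a DPA."""
    if not tool.approved:
        return False
    if tier < tool.max_permitted_tier:   # requested tier is more sensitive
        return False
    if tool.hosting_jurisdiction != "SA" and not tool.dpa_in_place:
        return False
    return True

# Example: an approved, US-hosted summarizer with a DPA, scoped to INTERNAL data.
tool = AIToolRecord("DocSummarizer", "ExampleVendor", "US", True,
                    DataTier.INTERNAL, date(2026, 1, 15), approved=True)
print(may_process(tool, DataTier.CONFIDENTIAL))  # False: outside permitted scope
print(may_process(tool, DataTier.INTERNAL))      # True
```

Making the scope check explicit like this keeps the PDPL cross-border question from being decided ad hoc at the moment an employee wants to use a tool.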

Conclusion

Shadow AI is not a future threat — it is already inside Saudi financial institutions, processing data without authorization, generating PDPL exposure with every query, and creating blind spots in SAMA CSCC compliance posture that no external auditor will catch until a breach forces the conversation. The window to get ahead of this is narrow: as generative AI adoption accelerates across the sector, the volume and sensitivity of data flowing through unsanctioned channels will only grow. The institutions that build AI governance frameworks now — asset registries, DPAs, CASB controls, and NHI policies — will be the ones that demonstrate SAMA maturity when the first regulatory guidance on AI data handling lands.

Is your organization prepared? Contact FyntraLink for a complimentary SAMA Cyber Maturity Assessment that includes a Shadow AI discovery and governance gap review tailored to SAMA CSCC Domain 3 and PDPL requirements.