
FastGPT SSRF (CVE-2026-44286): AI Agent Risk to SAMA Banks

Two FastGPT vulnerabilities disclosed on May 8, 2026 (CVE-2026-44286, an unauthenticated SSRF, and CVE-2026-44284, an MCP toolset bypass) put Saudi banks experimenting with AI agents at risk of internal-network pivoting and cloud metadata theft.

FyntraLink Team

On May 8, 2026, the FastGPT project disclosed two server-side request forgery vulnerabilities — CVE-2026-44286 and CVE-2026-44284 — that turn a popular open-source AI agent platform into an internal-network attack pivot. For Saudi financial institutions running early LLM and agentic AI pilots, the flaws collide directly with SAMA Cyber Security Framework expectations on emerging-technology controls.

Inside CVE-2026-44286 and CVE-2026-44284

FastGPT is a self-hosted AI agent-building platform widely used to wrap private LLMs around enterprise documents and internal APIs. CVE-2026-44286 is an unauthenticated SSRF that lets a remote attacker force the FastGPT backend to issue arbitrary HTTP requests to internal IP ranges, cloud metadata endpoints (169.254.169.254), and any other service reachable from the FastGPT host. CVE-2026-44284 sits in the MCP (Model Context Protocol) toolset endpoints — specifically the create and update flows, which accept a URL field. The platform stores the supplied URL once, then re-uses it during workflow execution without revalidating whether the destination is internal or private. Both issues are fixed in FastGPT 4.14.17.
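The missing control is an execution-time destination check. The sketch below is illustrative, not FastGPT's actual code: it shows the kind of predicate a workflow runner needs to apply to every resolved address immediately before each outbound fetch, failing closed on anything private, loopback, or link-local (which covers the 169.254.169.254 metadata endpoint).

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_destination(url: str) -> bool:
    """Return True if the URL points at a private, loopback, link-local,
    or reserved address -- i.e. a destination an agent must never fetch.
    Fails closed: unparseable or unresolvable hosts are treated as internal."""
    hostname = urlparse(url).hostname
    if hostname is None:
        return True
    try:
        # getaddrinfo handles both IP literals and DNS names
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return True
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return True
    return False
```

The crucial point is where the check runs: validating only when a URL is stored leaves the execution path unguarded, so a check like this must run at request time, on every address the hostname resolves to.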

Why an AI Platform Bug Becomes a Network Compromise

An SSRF in any modern application is dangerous, but inside an AI agent platform the blast radius widens. Agentic workflows are designed to fetch URLs, call tools, and chain external responses into prompts. The MCP toolset abuse path in CVE-2026-44284 is particularly nasty: an attacker with low-privileged tool-creation rights stores an internal URL such as http://10.0.0.5:6379 or http://internal-vault.bank.local once, and every subsequent agent run silently hits that endpoint with the FastGPT service identity. Cloud-deployed instances become attractive targets for IMDS credential theft, while on-premises deployments expose Redis, MongoDB, internal admin consoles, and unauthenticated APIs sitting behind the firewall.
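The store-once, replay-many pattern behind CVE-2026-44284 can be condensed into a few lines. Everything here is a hypothetical sketch of the flaw class — the field names and functions are illustrative, not FastGPT's schema or code:

```python
# Vulnerable pattern: a URL is accepted once at creation time and
# never re-checked against any egress policy afterwards.
stored_toolsets = []

def create_toolset(name: str, url: str) -> None:
    # No destination validation here, and none later either.
    stored_toolsets.append({"name": name, "url": url})

def run_workflow(fetch) -> list:
    # Every agent run replays the stored URLs with the service identity.
    return [fetch(t["url"]) for t in stored_toolsets]

# A low-privileged user stores an internal destination once...
create_toolset("docs-lookup", "http://10.0.0.5:6379")
# ...and each subsequent run silently hits it from the FastGPT host.
targets = run_workflow(lambda url: url)
```

Because the malicious entry persists in configuration, the attacker needs no further interaction: every legitimate workflow execution repeats the internal request on their behalf.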

Impact on Saudi Financial Institutions

SAMA-regulated banks are under increasing competitive pressure to deploy generative AI for customer service, internal knowledge bases, and AML investigation support. Many of these pilots run on open-source stacks like FastGPT, LangChain, and n8n in segregated environments — environments that often still need network reachability to identity providers, document stores, and core banking APIs to be useful. SAMA CSCC controls 3.3.5 (Application Security) and 3.3.14 (Emerging Technology Security), together with NCA ECC subdomain 2-15 on web application security, expect institutions to enforce input validation, network segmentation, and approved baseline configurations on any system processing organizational data — including AI middleware. PDPL adds the requirement to prevent unauthorized disclosure of personal data that may be cached, embedded, or fetched by these agents. A successful SSRF against a FastGPT instance pulling customer chat histories or KYC documents is simultaneously a CSCC, ECC, and PDPL incident.

Recommendations and Practical Steps

  1. Inventory every FastGPT, LangFlow, n8n, and similar AI orchestration deployment — including shadow-IT instances spun up by data science teams — and confirm versions. Upgrade FastGPT to 4.14.17 or later immediately.
  2. Place all AI agent platforms behind an egress proxy that blocks RFC1918 ranges, link-local 169.254.0.0/16, and your cloud provider's IMDS endpoint. Default-deny outbound, then allowlist only the LLM API and required SaaS destinations.
  3. Run FastGPT under a dedicated service account with no IAM role granting access to S3, Secrets Manager, or KMS — assume the agent will be used as an SSRF pivot.
  4. Audit existing MCP toolset entries for stored URLs pointing to internal hosts; treat any unexpected internal URL as a potential indicator of compromise and review FastGPT access logs for the affected endpoints.
  5. Add AI middleware to your SAMA CSCC vulnerability management cadence (control 3.3.10) — these platforms move fast and CVE flow will accelerate through 2026.
  6. Enforce authentication and MFA on every administrative interface of self-hosted AI tooling, and integrate the platform's audit trail into your SOC use cases for unusual outbound HTTP from the AI backend.
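The toolset audit in step 4 can be started with a short triage script. This sketch assumes you can export the stored toolset URLs as a plain list; it flags IP-literal destinations in private, loopback, or link-local ranges, plus a crude suffix heuristic for internal DNS names (full coverage of DNS names requires resolving them against your internal zones):

```python
import ipaddress
from urllib.parse import urlparse

def flag_internal_urls(toolset_urls):
    """Return the stored URLs that look like internal destinations --
    candidates for indicator-of-compromise review, not proof of one."""
    flagged = []
    for url in toolset_urls:
        host = urlparse(url).hostname or ""
        try:
            ip = ipaddress.ip_address(host)
            suspicious = ip.is_private or ip.is_loopback or ip.is_link_local
        except ValueError:
            # Not an IP literal; heuristic for common internal suffixes.
            suspicious = host.endswith(".local") or host.endswith(".internal")
        if suspicious:
            flagged.append(url)
    return flagged
```

Anything this flags should be cross-checked against FastGPT access logs for the MCP create/update endpoints to establish who stored the URL and when.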

Conclusion

The FastGPT disclosures are an early signal of a broader pattern: AI agent platforms are becoming high-value perimeter assets, and their vulnerabilities will increasingly serve as entry points into core banking networks. Treat every LLM gateway, agent runner, and MCP server as a Tier-1 application — because attackers already do.

Is your organization prepared? Contact Fyntralink for a complimentary SAMA Cyber Maturity Assessment covering your AI and emerging-technology stack.