Fake OpenAI Model on Hugging Face Steals Credentials: AI Supply Chain Risk for Financial Institutions

A fake OpenAI model on Hugging Face reached 244K downloads before removal, deploying a Rust infostealer that harvested browser credentials and SSH keys. Here's what Saudi financial institutions must do now.

FyntraLink Team

A malicious repository impersonating OpenAI's Privacy Filter model climbed to the #1 trending spot on Hugging Face, accumulating over 244,000 downloads before removal. The payload: a Rust-based infostealer that harvested browser credentials, session tokens, cryptocurrency wallets, and SSH keys from every Windows machine that executed its loader script. For Saudi financial institutions integrating AI into compliance workflows and fraud detection, this incident exposes a critical blind spot in third-party AI model governance.

How the Attack Worked: Typosquatting Meets AI Hype

Security researchers at HiddenLayer disclosed the campaign on May 7, 2026. The threat actor created a repository under the handle Open-OSS/privacy-filter, closely mimicking OpenAI's legitimate Privacy Filter open-weight release. The model card was copied nearly verbatim, and the repository included a loader.py file that, when executed, fetched a second-stage Rust binary from a command-and-control server at recargapopular[.]com. Hugging Face's trending algorithm, which factors in download counts and likes, was gamed through automation—667 fake accounts liked the repository, and download numbers were artificially inflated to push it to the top of the platform's discovery page.
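
One place to start is automating provenance checks before a model ever reaches a workstation. The sketch below is a minimal example built on the public huggingface_hub client: it flags repositories published outside an approved-org allowlist and repositories that ship Python files next to their weights. The TRUSTED_ORGS set and the repo id are illustrative placeholders, not a vetted policy.

```python
# Pre-download vetting sketch built on the public huggingface_hub client.
# TRUSTED_ORGS and the repo id below are illustrative placeholders.
from huggingface_hub import HfApi

TRUSTED_ORGS = {"openai", "google", "meta-llama"}  # example allowlist

def vet_repo(repo_id: str) -> list[str]:
    """Return human-readable red flags for a Hugging Face model repo."""
    info = HfApi().model_info(repo_id)
    flags = []

    org = repo_id.split("/")[0].lower()
    if org not in TRUSTED_ORGS:
        flags.append(f"publisher '{org}' is not on the approved-org allowlist")

    # The campaign above hid its stager in a loader.py shipped alongside
    # the weights, so any Python file in a model repo deserves review.
    py_files = [s.rfilename for s in (info.siblings or []) if s.rfilename.endswith(".py")]
    if py_files:
        flags.append(f"repo ships executable Python files: {py_files}")

    return flags

if __name__ == "__main__":
    for flag in vet_repo("some-org/privacy-filter"):  # placeholder repo id
        print("REVIEW:", flag)
```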

The Infostealer Payload: What Gets Exfiltrated

The Rust-based malware targets a wide range of sensitive data on compromised Windows endpoints. It scrapes stored credentials, cookies, encryption keys, and browsing history from both Chromium-based browsers (Chrome, Edge, Brave) and Gecko-based browsers (Firefox). It also hunts for cryptocurrency wallet files, VPN configuration data, and SSH private keys. All harvested data is compressed and exfiltrated to the attacker's infrastructure. For a bank employee running what they believe is a legitimate AI model for data classification, a single execution could hand attackers valid session tokens to internal banking portals, core system dashboards, or cloud management consoles.

Not an Isolated Incident: Six More Repositories Identified

HiddenLayer identified six additional Hugging Face repositories, uploaded under a separate account, that used nearly identical loader logic and shared the same exfiltration infrastructure. This points to an organized, sustained supply chain campaign targeting the open-source AI ecosystem rather than a one-off opportunistic attack. The tactic mirrors what the security community has documented in npm and PyPI package poisoning, except that the attack surface is now AI model registries, where trust is often assumed and verification tooling is less mature.

Impact on SAMA-Regulated Financial Institutions

Saudi banks and fintech companies are accelerating AI adoption for anti-money laundering (AML) screening, fraud detection, customer risk scoring, and regulatory reporting automation. SAMA's Cyber Security Control Compendium (CSCC) mandates rigorous third-party risk management under Domain 3 (Third-Party Cybersecurity), and NCA's Essential Cybersecurity Controls (ECC) require organizations to maintain secure software development and acquisition practices. Downloading unverified AI models from public repositories and executing them within production or staging environments violates both frameworks. More critically, the credential theft vector creates a direct path to unauthorized access to core banking systems—a scenario that triggers mandatory incident reporting obligations under SAMA's cybersecurity incident framework.

The Saudi Personal Data Protection Law (PDPL) adds another dimension. If an infostealer exfiltrates customer data cached in browser sessions—transaction histories, account details, or PII visible in open tabs—the institution faces regulatory exposure for failing to implement adequate technical safeguards. The National Data Management Office (NDMO) expects data controllers to enforce access controls that prevent exactly this class of lateral data leakage.

Practical Recommendations for Security Teams

  1. Establish an AI model approval gate. No model downloaded from Hugging Face, GitHub, or any public registry should reach production without review by your security team. Treat model artifacts with the same scrutiny as executable binaries: pickle-based formats (.bin, .pt) can embed code that runs on load, and even safer formats such as .safetensors and .gguf arrive in repositories alongside loader scripts that do. A provenance check like the sketch shown earlier belongs at this gate.
  2. Scan for executable code in model repositories. Tools like HiddenLayer's Model Scanner, Protect AI's ModelScan, or custom YARA rules can detect Python loaders, pickled objects with embedded payloads, and suspicious __init__.py files that execute on import. A minimal pickle-triage sketch follows this list.
  3. Enforce network segmentation for AI workloads. AI experimentation and model evaluation environments must be isolated from production banking networks. A compromised model execution in a sandboxed environment limits blast radius; the same execution on a developer's workstation connected to Active Directory does not.
  4. Monitor for anomalous outbound connections. The infostealer in this campaign communicated with a known C2 domain. Your SOC should flag any outbound traffic from AI/ML workstations to unrecognized domains, especially data exfiltration patterns involving compressed archives sent via HTTP POST; see the log-triage sketch after this list.
  5. Audit browser credential storage on privileged endpoints. Enforce enterprise policies that prevent Chrome and Edge from storing passwords locally on machines used to access banking systems, and deploy credential managers with hardware-backed storage instead; a policy-audit sketch follows this list.
  6. Update your SAMA CSCC third-party risk register. If your institution uses open-source AI models, the risk register must reflect the supply chain threat. Document the provenance verification process, the scanning tools deployed, and the approval workflow before any model enters the environment.
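
To make recommendation 2 concrete, here is a first-pass triage sketch using only the Python standard library. It walks a downloaded repository and reports the pickle opcodes (GLOBAL, STACK_GLOBAL, REDUCE) that enable code execution at load time. It is a filter, not a replacement for dedicated scanners such as ModelScan, and the file-extension list is an assumption about common pickle-bearing formats.

```python
# First-pass pickle triage using only the Python standard library.
# Dedicated scanners go further; this illustrates the detection idea.
import pickletools
import sys
from pathlib import Path

# GLOBAL/STACK_GLOBAL import arbitrary callables and REDUCE invokes them;
# together they are how a poisoned pickle executes code at load time.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

# Assumed pickle-bearing extensions. Note: modern torch .pt/.pth files are
# zip archives; unzip and scan the inner data.pkl for a meaningful result.
PICKLE_EXTS = {".bin", ".pkl", ".pt", ".pth"}

def scan_pickle(path: Path) -> set[str]:
    """Return the suspicious opcodes found in a pickle stream."""
    found: set[str] = set()
    with path.open("rb") as f:
        try:
            for opcode, _arg, _pos in pickletools.genops(f):
                if opcode.name in SUSPICIOUS_OPCODES:
                    found.add(opcode.name)
        except Exception:
            found.add("UNPARSEABLE")  # malformed streams also warrant review
    return found

if __name__ == "__main__":
    for p in Path(sys.argv[1]).rglob("*"):
        if p.is_file() and p.suffix in PICKLE_EXTS:
            hits = scan_pickle(p)
            if hits:
                print(f"{p}: {sorted(hits)}")
```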
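For recommendation 4, the following sketch shows the shape of a SOC triage rule over egress logs. The CSV column names, hostnames, and the one-megabyte threshold are hypothetical placeholders to adapt to your proxy or Zeek export; the C2 domain is the one reported in this campaign.

```python
# Hypothetical egress-log triage rule. The CSV layout (timestamp, src_host,
# method, dest_domain, content_type, bytes_out), the hostnames, and the
# 1 MB threshold are assumptions; map them to your proxy or Zeek fields.
import csv

KNOWN_C2 = {"recargapopular.com"}  # IOC from the campaign described above
ARCHIVE_TYPES = {"application/zip", "application/x-gzip", "application/octet-stream"}

def triage(log_path: str, ml_hosts: set[str]) -> None:
    """Print alerts for AI/ML workstations matching the exfil pattern."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["src_host"] not in ml_hosts:
                continue
            hit_c2 = row["dest_domain"] in KNOWN_C2
            # Large compressed POSTs mirror the archive-and-exfiltrate
            # behavior of the Rust infostealer.
            bulk_post = (
                row["method"] == "POST"
                and row["content_type"] in ARCHIVE_TYPES
                and int(row["bytes_out"]) > 1_000_000
            )
            if hit_c2 or bulk_post:
                print(f"ALERT {row['timestamp']} {row['src_host']} -> {row['dest_domain']}")

if __name__ == "__main__":
    triage("egress.csv", ml_hosts={"ml-ws-01", "ml-ws-02"})  # example inputs
```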
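And for recommendation 5, a Windows-only audit sketch that checks whether the documented Chrome and Edge group-policy value (PasswordManagerEnabled = 0) is actually present on an endpoint. Treat it as a starting point for fleet-wide compliance checks, not a complete audit.

```python
# Windows-only endpoint audit: confirm the documented Chrome and Edge
# group-policy value PasswordManagerEnabled is present and set to 0
# (0 = the browser may not store passwords locally).
import winreg

POLICY_KEYS = {
    "Chrome": r"SOFTWARE\Policies\Google\Chrome",
    "Edge": r"SOFTWARE\Policies\Microsoft\Edge",
}

def password_storage_blocked(policy_path: str) -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, policy_path) as key:
            value, _type = winreg.QueryValueEx(key, "PasswordManagerEnabled")
            return value == 0
    except OSError:
        return False  # key or value missing: the policy is not enforced

if __name__ == "__main__":
    for browser, path in POLICY_KEYS.items():
        status = "enforced" if password_storage_blocked(path) else "MISSING"
        print(f"{browser}: password-storage block {status}")
```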

Conclusion

The Hugging Face infostealer campaign demonstrates that AI supply chain attacks have moved from theoretical risk to operational reality. The 244,000 downloads before takedown prove that community trust and trending algorithms can be weaponized at scale. For SAMA-regulated institutions, the message is clear: AI adoption without model provenance verification and execution sandboxing is a compliance gap and an operational security failure waiting to be exploited.

Is your organization prepared? Contact Fyntralink for a complimentary SAMA Cyber Maturity Assessment that includes AI supply chain risk evaluation aligned with CSCC Domain 3 and NCA ECC requirements.