
STAT WORTH SHARING:

A self-propagating worm hit SAP's developer ecosystem this week and exfiltrated stolen credentials to 1,200 public GitHub repositories in under 24 hours. It didn't break through a firewall; it used stolen tokens from developers' own AI coding tools.

If someone on your leadership team needs to see this, forward it their way.

TL;DR:

A supply chain worm called "Mini Shai-Hulud" — attributed to TeamPCP, the same group behind March's LiteLLM breach — compromised four SAP npm packages on April 29, using stolen GitHub tokens from developers' Claude Code environments to propagate itself through CI/CD pipelines and exfiltrate cloud credentials at scale.

The White House formally told Anthropic it opposes expanding access to its Mythos AI model from roughly 50 companies to 120, citing national security concerns and insufficient compute capacity — the first time the government has blocked a commercial AI model's rollout. And on the same day as the SAP attack, security firm Wiz used an AI-powered reverse engineering tool to find a critical remote code execution vulnerability in GitHub Enterprise Server in hours — work that would previously have taken months and couldn't be economically justified.

Development 1: The Worm That Used Your AI Coding Tool Against You

What Happened

On April 29, researchers identified malicious versions of four SAP CAP framework npm packages — the core database and build libraries used by thousands of enterprise applications running on SAP's Cloud Application Programming model. The attacker, attributed to TeamPCP by Wiz and OX Security, didn't steal a static credential or impersonate a developer. They weaponized Claude Code.

The malware payload specifically targeted developers running Claude Code, injecting malicious .claude/settings.json files that hijack the AI tool's session hooks. Once installed, the worm harvested credentials from AWS, Azure, GCP, GitHub, npm, and every major browser on the machine — then used stolen GitHub tokens to commit itself into every other repository those tokens could reach, disguising each commit as a routine dependency update. Stolen credentials were exfiltrated to 1,200 public GitHub repositories, each named with two words from the Dune universe and tagged "A Mini Shai-Hulud has Appeared." SAP detected the compromise and pushed clean versions within roughly two and a half hours, but the exposure window was enough. Mend's analysis noted the payload also hit Lightning AI's deep learning framework, which has 97 million total downloads, via the same propagation mechanism.

This is the third wave of the Shai-Hulud campaign — following the original npm attack in September 2025 and a second wave in November. TeamPCP has been connected to the LiteLLM/Mercor breach reported in this brief four weeks ago. The group appears to be building toward coordinated extortion across every organization caught in successive waves.

Why It Matters

AI coding tools are now a primary attack surface. The payload was specifically written to target Claude Code's configuration files. As developers give AI assistants write access to production repositories, those tools become a direct path from a compromised package to your entire codebase and cloud infrastructure.

The two-and-a-half hour window is the problem. SAP responded quickly. That wasn't fast enough. Any developer who ran npm install during those hours had their credentials silently harvested. Standard automated dependency updates — the kind most CI/CD pipelines run without human review — are the mechanism. Speed of response matters less than whether you're checking what's being installed before it runs.

TeamPCP is treating supply chain attacks as a franchise. LiteLLM in March, Bitwarden CLI earlier this month, SAP this week. Each wave builds on stolen credentials from the last. The group has explicitly stated its intent to partner with ransomware organizations to monetize the accumulated data at scale. Organizations caught in any of these waves should treat credential exposure as active, not theoretical.

If your development teams use SAP CAP, MTA build tools, or Lightning AI — and ran npm install on April 29 between roughly 11:25 and 14:00 UTC — rotate everything: AWS, Azure, GCP credentials, GitHub tokens, SSH keys, npm tokens, and browser-stored passwords.

If you use Claude Code with GitHub repo write access, audit your .claude/settings.json files for unauthorized session hooks. Even if your teams weren't in the window, this is the third SAP/npm wave in eight months. Pinning exact package versions and auditing preinstall scripts before they run is no longer optional.
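For teams doing that audit, here's a minimal sketch. It assumes the hook layout Claude Code documents (event names mapping to matcher groups containing command hooks); the suspicious-substring list is our own heuristic, not an indicator list from the actual payload, whose exact hook contents haven't been published in full:

```python
import json
from pathlib import Path

# Substrings that rarely belong in a legitimate Claude Code hook.
# Heuristic only; not indicators taken from the real payload.
SUSPICIOUS = ("curl ", "wget ", "http://", "https://", "base64", "| sh", "| bash")

def audit_settings(root: Path) -> list[tuple[Path, str]]:
    """Walk `root` for .claude/settings.json files and flag hook
    commands containing suspicious substrings."""
    findings = []
    for path in root.rglob(".claude/settings.json"):
        try:
            settings = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            findings.append((path, "unreadable or malformed"))
            continue
        # Hooks are event names mapping to matcher groups, each
        # containing a list of command hooks; flatten defensively.
        for event, groups in settings.get("hooks", {}).items():
            for group in groups:
                for hook in group.get("hooks", []):
                    cmd = hook.get("command", "")
                    if any(s in cmd for s in SUSPICIOUS):
                        findings.append((path, f"{event}: {cmd}"))
    return findings
```

Treat any hit as a prompt for human review; attackers can trivially avoid these substrings, so a clean run is not a clean bill of health.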

→ One action this week: Ask your development lead one question: do your CI/CD pipelines automatically install unpinned package versions? If the answer is yes, or "I'm not sure," that's the gap that handed TeamPCP 1,200 repositories' worth of enterprise credentials this week.
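A quick way to answer that question yourself is to scan manifests for non-exact version specifiers. A sketch, with an illustrative example manifest:

```python
import json
import re

# Specifiers that let npm resolve a newer (possibly malicious)
# release than the one you tested against: ranges (^ ~ > <), a bare
# wildcard, "latest", "1.x"-style patterns, or an empty spec.
UNPINNED = re.compile(r"^[\^~><*]|^latest$|\.x$|^\s*$")

def unpinned_deps(package_json_text: str) -> list[str]:
    """Return 'name@spec' for every dependency not pinned to an
    exact version."""
    pkg = json.loads(package_json_text)
    flagged = []
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        for name, spec in pkg.get(section, {}).items():
            if UNPINNED.search(spec):
                flagged.append(f"{name}@{spec}")
    return flagged

example = '{"dependencies": {"@sap/cds": "^7.0.0", "left-pad": "1.3.0"}}'
print(unpinned_deps(example))  # ['@sap/cds@^7.0.0']
```

Pair this with `npm ci --ignore-scripts` in CI: `npm ci` installs exactly what the committed lockfile says, and `--ignore-scripts` stops preinstall hooks, the execution vector here, from running at all.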

Development 2: The White House Blocked Anthropic's Mythos Expansion

What Happened

On April 30, the Wall Street Journal reported that the White House formally told Anthropic it opposes the company's plan to expand access to its Mythos AI model from roughly 50 organizations to 120. Bloomberg confirmed the administration's position later that evening.

Mythos — covered in depth in this newsletter's longform piece — is Anthropic's autonomous vulnerability discovery model, currently restricted to a vetted cohort under Project Glasswing that includes Amazon, Apple, Google, Microsoft, Nvidia, Palo Alto Networks, CrowdStrike, JPMorgan Chase, and the NSA. Anthropic proposed adding roughly 70 new organizations. According to Security Boulevard's coverage, administration officials raised two objections: the model's potential for misuse against critical infrastructure including power plants and hospitals, and concern that Anthropic lacks the compute capacity to serve a larger user base without degrading government access. Defense Secretary Pete Hegseth went further, labeling Anthropic a "supply chain risk" to national security — a characterization that reflects months of friction between the administration and the company. The confrontation was sharpened by a separate disclosure earlier in the week: a small group of unauthorized users had gained access to Mythos through a private online forum on the same day Anthropic first announced its limited release.

Anthropic declined to comment. The White House did not immediately respond to press requests.

Why It Matters

This is the first time the U.S. government has blocked a commercial AI model's expansion on national security grounds. That's a new precedent. As AI models develop offensive capabilities, government influence over who gets access — and under what conditions — will become a standard part of the enterprise procurement conversation, not a one-off.

The unauthorized access incident is the detail that changes the risk calculus. Controlled distribution and effective distribution are not the same thing. Mythos leaked to unauthorized users on its launch day despite Anthropic's restricted access framework. The White House citing that incident as grounds for blocking expansion means Anthropic now has to demonstrate not just good intent but operational security competence before any further rollout.

For enterprises outside the Glasswing cohort, the practical question is access on whose terms. The current trajectory points toward government-mediated programs as the distribution model for advanced offensive AI capabilities — meaning enterprise security teams may eventually get access, but through procurement channels and trust frameworks that don't exist yet. Tracking how that develops now matters.

The Mythos situation is a preview of how advanced AI procurement is going to work. Government concerns about compute capacity and misuse risk will shape which organizations get access and when. If your security leadership isn't tracking the Project Glasswing program and its evolution, that's worth adding to the radar — not because Mythos access is imminent for most organizations, but because the governance model being built around it will apply to whatever comes next.

→ One action this week: Check when your organization last ran a security tabletop exercise. If it's been over 12 months — or if the scenario didn't include an AI-assisted attack — it's time to schedule one.

Is AI capability access becoming part of your organization's security planning conversations — or does it still feel like a future problem? A one-liner on where your team is would be genuinely useful.

FROM OUR PARTNERS

Check out Wispr Flow, our partner on today's newsletter. Wispr Flow is what I use for 90% of my work; typing feels oddly slow now, thanks to the way Flow learns how you speak and turns it into text.

Voice dictation that doesn't mangle your syntax.

Most dictation tools choke on technical language. Wispr Flow doesn't. It understands code syntax, framework names, and developer jargon — so you can dictate directly into your IDE and send without fixing.

Use it everywhere: Cursor, VS Code, Warp, Slack, Linear, Notion, your browser. Flow sits at the system level, so there's nothing to install per app. Tap and talk.

Developers use Flow to write documentation 4x faster, give coding agents richer context, and respond to Slack without breaking focus. 89% of messages go out with zero edits. Free on Mac, Windows, and iPhone.

Development 3: Wiz Used AI to Find a Critical GitHub Bug. It Took a Few Hours.

What Happened

On April 29 — the same day as the SAP attack — GitHub disclosed CVE-2026-3854, a high-severity remote code execution vulnerability in GitHub Enterprise Server with a CVSS score of 8.7. The flaw would allow any authenticated user with push access to a repository to execute arbitrary code on the server. GitHub patched it within two hours of being notified and confirmed no exploitation had occurred.

What makes this story different from a routine patch disclosure is how the vulnerability was found. Security firm Wiz used an AI-powered reverse engineering tool called IDA MCP to analyze GitHub's compiled binaries — closed-source code that had previously been too costly and time-consuming to audit manually. Dark Reading reported that Wiz had been pursuing this target since September 2024 but couldn't justify the resources the work required. With IDA MCP, they reconstructed internal protocols, identified where user input influenced server behavior, and found the vulnerability. The researcher noted it likely would have taken months of dedicated effort without the AI tool. GitHub called it one of the first critical vulnerabilities discovered in closed-source binaries using AI — a description that is both a recognition of defensive progress and an acknowledgment of what the same technique makes possible on the other side.

The vulnerability itself involved how git push options were handled in server metadata — user-supplied values were incorporated without sufficient sanitization, allowing an attacker to inject fields that downstream services would interpret as trusted internal values.
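The precise GHES internals aren't public, so the following is an illustration of the general bug class only, with hypothetical field names: a user-controlled push option embedded into newline-delimited key=value metadata lets an attacker inject fields that a downstream parser treats as internal.

```python
# Illustrative only: models the bug class described in the
# disclosure, not GitHub's actual code or field names.

def serialize_naive(push_option: str) -> str:
    # Vulnerable: the user-controlled value is embedded unsanitized.
    return f"repo=acme/app\npush_option={push_option}\n"

def parse(metadata: str) -> dict:
    # Downstream service: treats every parsed field as trusted.
    return dict(line.split("=", 1) for line in metadata.splitlines() if line)

# A newline in the push option injects a field the parser accepts
# as an internal value ("internal_role" is hypothetical).
evil = "ci-skip\ninternal_role=admin"
print(parse(serialize_naive(evil)))
# {'repo': 'acme/app', 'push_option': 'ci-skip', 'internal_role': 'admin'}

def serialize_safe(push_option: str) -> str:
    # One fix: reject (or encode) metacharacters before embedding.
    if any(c in push_option for c in "\n\r="):
        raise ValueError("invalid characters in push option")
    return f"repo=acme/app\npush_option={push_option}\n"
```

The fix pattern is the same whatever the format: validate or encode user input at the boundary where it enters a trusted representation, not after.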

Why It Matters

Work that used to take months now takes hours — for defenders and attackers. Wiz couldn't justify this audit economically before AI tooling made it fast enough to attempt. That same calculus applies to anyone looking for the same vulnerabilities offensively. The cost advantage defenders once held in thorough code auditing is shrinking on both sides.

Closed-source software is no longer meaningfully harder to audit than open-source. The implicit security assumption behind compiled, closed-source binaries was that opacity raised the cost of attack. IDA MCP eliminates most of that cost advantage. GitHub Enterprise Server, proprietary ERP systems, legacy operational technology — anything that relied on obscurity as a partial defense is now more exposed than it was six months ago.

GitHub's two-hour patch response is the right benchmark, and most organizations can't match it. Going from disclosure to patch on a critical RCE in under two hours is genuinely fast. The question for enterprises isn't just whether GitHub patched — it's whether your organization's GitHub Enterprise Server instances are running the fixed version yet. The Mandiant M-Trends 2026 report found that 28.3% of CVEs are now exploited within 24 hours of disclosure. The window between "patch available" and "patch deployed" is where most breaches live.

GitHub Enterprise Server customers should confirm they're running a fixed version: 3.14.24, 3.15.19, 3.16.15, 3.17.12, 3.18.6, or 3.19.3. More broadly, if your organization runs any on-premises software that hasn't been through a serious security audit recently — proprietary or otherwise — the assumption that it's hard to attack because it's hard to read no longer holds.
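If you manage several instances, a check like this saves eyeballing. The version numbers are copied from the fixed releases above; the helper itself is ours:

```python
# Fixed GHES release per minor series, from the advisory above.
FIXED = {
    (3, 14): (3, 14, 24), (3, 15): (3, 15, 19), (3, 16): (3, 16, 15),
    (3, 17): (3, 17, 12), (3, 18): (3, 18, 6),  (3, 19): (3, 19, 3),
}

def is_patched(version: str) -> bool:
    """True if `version` is at or above the fixed release for its
    minor series. Unknown series are treated as unpatched so they
    get a human look rather than a silent pass."""
    parts = tuple(int(p) for p in version.split("."))
    fixed = FIXED.get(parts[:2])
    return fixed is not None and parts >= fixed

print(is_patched("3.17.11"))  # False: one release short of 3.17.12
print(is_patched("3.18.6"))   # True
```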

→ One action this week: Verify your GitHub Enterprise Server version against the patched releases above. If you're not on a fixed version, that's a critical RCE sitting open. This one had a two-hour turnaround from disclosure to patch — your deployment window shouldn't be longer than a week.

💡 FINAL THOUGHTS

The access decisions you make around AI tools and agents carry more weight than they did six months ago. That weight only goes one direction from here — take your time getting it right.

If someone in your organization needs to be reading this brief, it's probably the person making AI tool decisions without a security lens. Forward it their way.

How helpful was this week's email?


We are out of tokens for this week's security brief.

- Hashi
