STAT WORTH SHARING:

Only 5% of security leaders say they could detect and contain a compromised AI agent operating inside their systems. The other 95% are essentially flying blind.

If someone on your leadership team needs to see this, forward it their way.

TL;DR:

Cloud hosting giant Vercel was breached this week after an employee connected a small AI productivity tool, Context AI, to their corporate Google account — the OAuth access that tool held became the attacker's front door into Vercel's internal systems, with stolen data now listed for sale at $2 million.

Anthropic's most powerful model to date, Mythos, leaked from internal development and was described as capable of autonomously finding and chaining software vulnerabilities at scale. Anthropic is deliberately keeping it off the market, but the Cloud Security Alliance is already warning CISOs to harden their environments before that capability reaches attacker hands.

And fresh research from Cybersecurity Insiders found that while 71% of organizations have confirmed AI tools operating inside core enterprise systems, only 16% have any meaningful governance over that access.

Development 1: Vercel Got In Through the Side Door You Left Open

What Happened

Vercel — one of the most widely used cloud hosting platforms for web developers — disclosed a breach on April 19 after attackers made their way into its internal systems through a third-party AI tool. The entry point wasn't a sophisticated exploit. It was an OAuth connection.

A Vercel employee had downloaded an app built by Context AI, a small AI productivity tool, and connected it to their Vercel enterprise Google Workspace account with broad permissions. Context AI was itself compromised in a separate incident in March — one it didn't disclose at the time. Attackers used the OAuth tokens from that breach to take over the Vercel employee's Google account and gain access to Vercel's internal environments. Customer API keys, source code, and database credentials are now being offered for sale on BreachForums at $2 million, with a threat actor claiming ties to ShinyHunters — a group with a history of targeting cloud-based companies. TechCrunch confirmed the listing.

Vercel said its open-source projects were unaffected, but the full scope of customer exposure is still being determined alongside Mandiant and law enforcement.

Why It Matters

This isn't a Vercel story — it's an OAuth story. The actual vulnerability was a Vercel employee granting "Allow All" permissions to a consumer-grade AI app on their enterprise account. That's not a sophisticated attack vector. It's a predictable one, and it's happening in every organization where employees connect personal AI tools to work systems without IT visibility.

Context AI sat on a known breach for weeks without disclosure. The compromise happened in March. Vercel found out in April. The lag between a third-party breach and your awareness of it is exactly the window attackers exploit — and it's a window most vendor contracts don't close.

The blast radius of small AI tools is larger than it looks. Context AI is not a household name. It builds analytics and evaluation tools for AI models. But it had OAuth access to a Vercel enterprise Google Workspace account, and that was enough. The tool doesn't need to be large to be dangerous if it holds enterprise credentials.

Every AI productivity app your employees have connected to a corporate account is a potential entry point. Most of those connections were made without an IT review, and many carry broader permissions than the task required. This is worth a direct audit before it becomes an incident.

→ One action this week: Have IT pull a list of every active OAuth app connected to your corporate Google Workspace or Microsoft 365. Revoke anything that isn't actively managed and hasn't been security reviewed. The Vercel breach started with one employee's "Allow All" click.
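If your team would rather script that audit than click through the admin console, here's a minimal sketch for the Google Workspace side, using the Admin SDK Directory API's tokens.list endpoint. It assumes a service-account key with domain-wide delegation and the directory read and user-security scopes already authorized; the key file name, admin address, and list of "broad" scopes below are placeholder assumptions to adapt, not recommendations.

```python
# Minimal OAuth-grant audit sketch (assumptions noted above).
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]

# Hypothetical key file and admin user; both are placeholders.
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Illustrative list of scopes that amount to an "Allow All" grant.
BROAD_SCOPES = (
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin",
)

def iter_users():
    """Page through every user in the Workspace tenant."""
    page_token = None
    while True:
        resp = directory.users().list(
            customer="my_customer", maxResults=500, pageToken=page_token
        ).execute()
        yield from resp.get("users", [])
        page_token = resp.get("nextPageToken")
        if not page_token:
            return

for user in iter_users():
    email = user["primaryEmail"]
    tokens = directory.tokens().list(userKey=email).execute().get("items", [])
    for token in tokens:
        scopes = token.get("scopes", [])
        if any(s.startswith(b) for s in scopes for b in BROAD_SCOPES):
            # Candidates for review; tokens().delete() revokes a grant.
            print(f"{email}: {token.get('displayText', token['clientId'])}")
            print(f"  scopes: {scopes}")
```

Once you've confirmed which grants are unmanaged, the same tokens collection exposes a delete method, so revocation can run from the same script.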

Development 2: The Security Community Is Now Reacting to Mythos

What Happened

We covered Anthropic's Claude Mythos and Project Glasswing in depth last week — the capabilities, the coalition, the patch gap, and the behavioral anomalies Anthropic published in their own system card. That full piece is worth reading if you haven't yet.

The development this week is the security community's response. The Cloud Security Alliance published an advisory urging CISOs not to wait for Mythos-level capability to reach the open market before acting. Their guidance: run tabletop exercises now, harden your patching workflows, and verify that your segmentation, Zero Trust architecture, and MFA posture are actually in place — not just documented. Malwarebytes noted that Anthropic's own internal assessment concluded the offensive side of the AI capability curve is iterating faster than most security teams are adopting defensive tooling.

OpenAI accelerated its own response this week: according to Axios, GPT-5.4-Cyber, a restricted cybersecurity model with a similar controlled-access structure, is nearly finalized.

Why It Matters

The CSA advisory is a signal, not just a recommendation. When the Cloud Security Alliance publishes guidance specifically tied to a single unreleased model, it means the threat modeling has already shifted. The question isn't whether this capability matters — it's whether your security posture was built for the environment that's already arriving.

Two major AI labs now have restricted offensive-capability models. Anthropic's Mythos and OpenAI's GPT-5.4-Cyber follow the same controlled access logic. That convergence isn't reassuring — it confirms both companies independently assessed the same risk as too significant for general release. The capability exists. It's just not broadly available yet.

The patch gap is the real exposure for most organizations. As we covered last week, less than 1% of the vulnerabilities Mythos has found have been patched. Finding vulnerabilities was never the bottleneck — fixing them is. If your organization's patch cycle still runs on quarterly maintenance windows, that's the gap worth closing before this conversation becomes urgent.

If you haven't read the full Mythos piece, it's here. The section on what the model did during testing — including the unsolicited email it sent to a researcher eating a sandwich in a park — is worth your time.

→ One action this week: Check when your organization last ran a security tabletop exercise. If it's been over 12 months — or if the scenario didn't include an AI-assisted attack — it's time to schedule one.

Has your organization run a tabletop exercise in the last 12 months that specifically modeled an AI-assisted attack? Even a "not yet" is worth knowing.

FROM OUR PARTNERS

Check out Deel, our partner on today's newsletter. The folks over at Deel help you hire and scale faster and smarter, and avoid the headaches of building a global team. Even though AI is a big part of what we cover, human capital is precious. You need to hire the best, and Deel helps you do that.

Global HR shouldn't require five tools per country

Your company going global shouldn't mean endless headaches. Deel's free guide shows you how to unify payroll, onboarding, and compliance across every country you operate in. No more juggling separate systems for the US, Europe, and APAC. No more Slack messages filling the gaps. Just one consolidated approach that scales.

Development 3: Your AI Tools Have More Access Than You Think — And Nobody's Watching

What Happened

Cybersecurity Insiders and Saviynt published new research today covering how AI identities — the service accounts, API tokens, and OAuth credentials that AI tools operate under — are accumulating inside enterprise systems with almost no governance attached.

The numbers from their 2026 CISO AI Risk Report: 71% of CISOs confirmed AI tools have access to core enterprise systems like Salesforce and SAP. Only 16% said that access is governed effectively. 92% of respondents lack full visibility into what those AI identities are actually doing inside their environments. 95% expressed doubt in their ability to detect or contain misuse if it occurred. And 75% have already found unsanctioned AI tools running somewhere in their organization.

The Cybersecurity Insiders founder summarized the situation plainly: AI already has access to business-critical systems, often with more autonomy and less oversight than any security team would knowingly approve.

Why It Matters

The Vercel breach is what this data looks like in practice. One employee, one unsanctioned app, one over-permissioned OAuth connection. The research above shows that the conditions that made that breach possible are sitting in 71% of enterprise environments right now. This isn't a future-state warning: 75% of organizations have already found unauthorized AI tools inside their systems.

AI identities don't behave like employee accounts. They can invoke APIs, hold persistent credentials, and operate continuously across applications without logging in and out the way a human user does. Standard identity governance tools weren't built to track them, which is why 86% of organizations don't enforce formal access policies for these accounts at all.
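To make that difference concrete, here's an illustrative heuristic over a hypothetical CSV export of authentication events (the file name and column names are assumptions; adapt them to whatever your identity provider actually emits). An identity that authenticates constantly but never interactively is almost certainly a service account or AI agent, and it's exactly the kind of account that slips past governance built around human login patterns:

```python
# Flag identities that never log in interactively but authenticate
# constantly via tokens. The CSV layout ("identity", "auth_type") is a
# hypothetical export format, not a standard.
import csv
from collections import defaultdict

events = defaultdict(list)
with open("auth_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        events[row["identity"]].append(row["auth_type"])

for identity, auth_types in events.items():
    interactive = auth_types.count("interactive")
    token_based = len(auth_types) - interactive
    # Humans log in interactively at least occasionally; a credentialed
    # agent hitting APIs around the clock almost never does. The threshold
    # is arbitrary; tune it to your event volume.
    if interactive == 0 and token_based > 100:
        print(f"likely non-human identity: {identity} ({token_based} token auths)")
```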

Shadow AI is harder to contain than shadow IT ever was. When an employee installed unauthorized software ten years ago, IT could usually find it on the endpoint. AI tools operate through cloud integrations, browser extensions, and API connections that are invisible to traditional endpoint discovery. By the time a security team looks for them, they've often been running for months.

The starting point isn't a policy — it's a discovery exercise. You can't govern what you can't see. Run an audit of every AI tool connected to your corporate systems, including the ones employees set up without IT review. The Vercel and LiteLLM breaches from the past two weeks are the same story told from different angles: unknown AI access is the exposure.

→ One action this week: Ask your IT or security team how many non-human identities — AI agents, service accounts, API tokens — currently have access to your core business systems. If nobody has a confident answer, that's the gap.
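For a Microsoft 365 shop, that question is scriptable too. Below is a minimal sketch against Microsoft Graph that counts service principals (every one of them a non-human identity) and the subset holding delegated OAuth grants into user data. It assumes you've already acquired an access token with Directory.Read.All; the endpoints are standard Graph v1.0, but treat the summary logic as a starting point, not a full inventory:

```python
# Count non-human identities and their OAuth footprint via Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # acquire out of band, e.g. via an MSAL client-credentials flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def page(url):
    """Follow @odata.nextLink pagination through a Graph collection."""
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("value", [])
        url = body.get("@odata.nextLink")

# Every service principal in the tenant: apps, integrations, agents.
service_principals = list(page(f"{GRAPH}/servicePrincipals"))

# Delegated OAuth grants show which of them act on behalf of your users.
grants = list(page(f"{GRAPH}/oauth2PermissionGrants"))
granted_ids = {g["clientId"] for g in grants}  # clientId = SP object id

print(f"{len(service_principals)} non-human identities in the tenant")
print(f"{len(granted_ids)} of them hold delegated OAuth grants into user data")
for sp in service_principals:
    if sp["id"] in granted_ids:
        print(f"  {sp.get('displayName', sp['appId'])}")
```

If nobody can run something like this, or nobody knows who holds the credentials to, that's the gap made visible.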

💡 FINAL THOUGHTS

Apart from Anthropic’s new model, most of the exposure this week didn't require a sophisticated attacker. It required an employee clicking Allow and moving on with their day. The hardest part of security is the human factor.

If someone in your organization needs to be reading this brief, it's probably the person making AI tool decisions without a security lens. Forward it their way.


We are out of tokens for this week's security brief.

- Hashi
