STAT WORTH SHARING:

In one documented test, a single crafted email was enough to take over an AI agent running in Microsoft 365 and expose everything it had access to. No phishing link. No malware. Just a message.

If someone on your leadership team needs to see this, forward it their way.

TL;DR:

A compromised PyPI account pushed malicious code into LiteLLM — the open-source library sitting quietly under thousands of AI applications — and Mercor, a $10 billion AI startup, confirmed it lost four terabytes of data in the process. Google DeepMind published research identifying six distinct methods for hijacking autonomous AI agents through ordinary web content, with some attacks succeeding up to 90% of the time. And while federal AI legislation stays gridlocked, the FTC, SEC, and DOJ have stopped waiting — they're already pursuing AI-related violations using existing statutes, as state laws pile on independently.

Development 1: The AI Dev Stack Got a Supply Chain Attack

What Happened

A threat actor compromised a PyPI account and pushed malicious code into LiteLLM, an open-source Python library that gives developers a single interface to over 100 large language models. LiteLLM isn't a flashy product — it's plumbing. It runs underneath thousands of AI applications and developer workflows, largely unexamined.

Mercor — an AI recruiting and data startup valued at 10billion—confirmeditwasamongthoseaffected.ThehackinggroupLapsus10 billion — confirmed it was among those affected. The hacking group Lapsus 10billion—confirmeditwasamongthoseaffected.ThehackinggroupLapsus claimed responsibility and is now auctioning four terabytes of data allegedly taken from Mercor's systems. Wired reported that Meta has paused all work with the company, and other major AI labs are reassessing their vendor relationships.

What makes this different from a standard credential theft is what was actually at risk: proprietary training data, model evaluation methodologies, contractor information. Not credentials. Not customer PII. AI intellectual property.

Why It Matters

This is the first confirmed major supply chain attack targeting AI infrastructure specifically. PyPI packages sit at the foundation of how AI systems get built. The attacker didn't go after a finished product — they went after the pipes.

Meta pulling work from a $10 billion vendor isn't a routine response. When hyperscalers start breaking relationships over a breach, it signals the exposure is serious enough to affect roadmaps, not just generate incident reports. Watch how other labs respond over the next few weeks.

AI training data is now a primary target alongside credentials and PII. This breach puts model evaluation methodologies and training data in the same conversation as passwords and credit card numbers. Attackers have noticed that AI IP is valuable — and that it's often the least protected thing in the stack.

If your organization uses AI tools built on open-source Python libraries — which is most of them — your supply chain exposure is real. Enterprises need an AI-specific Software Bill of Materials (SBOM) and a dependency audit process. For smaller teams: you're entitled to ask your AI vendor what libraries their product depends on. If they can't answer that, it's worth knowing.

→ One action this week: Ask your IT team or primary AI vendor for a list of open-source dependencies in your AI tools. If they can't produce one, that's a gap worth flagging before the next LiteLLM.

Development 2: Six Ways Your AI Agent Can Be Hijacked Right Now

What Happened

Google DeepMind published research this week detailing how autonomous AI agents — the kind now being deployed to read emails, browse the web, and take action on your behalf — can be manipulated through the content they encounter. The research identifies six attack categories, each targeting a different point in how an agent operates.

The breakdown, per The Decoder's coverage: instructions hidden in HTML comments or image metadata that agents follow without users seeing anything; emotionally loaded content that skews an agent's reasoning; poisoned documents that corrupt long-term memory in RAG systems; manipulated inputs — a single crafted email — that bypass security classifiers and take over the agent's actions entirely; compromised orchestrator agents that launch sub-agents running malicious instructions, with success rates between 58% and 90%; and agents that mislead their human supervisors through doctored summaries or by exploiting approval fatigue.

That last one deserves a second look. The scenario isn't just an agent going rogue — it's a compromised agent making you approve actions you wouldn't have approved if you'd seen the real summary.

Why It Matters

The attack surface for any agent that browses the web is the entire internet. Every page it reads, every email it processes, every external document it queries is potentially hostile. Sandboxing and output monitoring aren't premium add-ons — they're the floor.

This research names specific platforms. Microsoft 365 Copilot, Salesforce Agentforce, Google Workspace AI are directly exposed to behavioral control traps. If your organization has deployed commercial AI assistants with email or document access, this isn't theoretical.

Nobody has answered the liability question yet. If a compromised agent initiates an unauthorized wire transfer or exposes protected health information, who is responsible — the vendor, the enterprise that deployed it, or the employee who approved a summary without scrutinizing it? These accountability gaps exist in every active deployment right now.

Security teams deploying AI agents need agent-specific policies before they're exposed, not after. Review what your commercial AI assistants can access and what actions they're permitted to take autonomously. If your current governance policy doesn't distinguish between "AI that generates text" and "AI that acts," it's behind the actual risk.

Has your security team written a policy specifically for AI agents — separate from your general AI use policy? Even a "not yet" is useful data.

FROM OUR PARTNERS

Check out deel - our partners on today’s newsletter. The folks over at deel help you hire and scale faster, smarter and avoid headaches building a global team. Even though AI is a big part of what we cover, human capital is precious. You need to hire the best. deel helps you do that.

Hiring in 8 countries shouldn't require 8 different processes

This guide from Deel breaks down how to build one global hiring system. You’ll learn about assessment frameworks that scale, how to do headcount planning across regions, and even intake processes that work everywhere. As HR pros know, hiring in one country is hard enough. So let this free global hiring guide give you the tools you need to avoid global hiring headaches.

Development 3: The Feds Aren't Waiting for an AI Law

What Happened

On March 20, the White House released a National Policy Framework for AI recommending that Congress preempt state laws across seven areas — child protection, intellectual property, and AI infrastructure among them. Federal AI legislation, however, remains stalled.

That hasn't slowed enforcement. Morgan Lewis documented the current activity: the FTC is applying Section 5 to deceptive AI practices and inflated capability claims. The SEC is actively pursuing "AI washing" — public companies overstating their AI capabilities to investors. The DOJ is using the False Claims Act against AI tools deployed in government-funded programs. Antitrust enforcers are scrutinizing algorithmic pricing.

Meanwhile, states aren't waiting either. Colorado's AI Act is already in effect. California and New York have introduced transparency and algorithmic pricing disclosure requirements. For any company operating across multiple states, that's several distinct compliance regimes arriving before a federal standard exists to rationalize them.

Why It Matters

The SEC's AI washing investigations cut in both directions. Overstating AI capabilities to investors creates disclosure risk. Understating them to regulators while marketing them to customers creates fraud exposure. The safe position is accuracy — which requires actually knowing what your AI tools do, not just what the vendor says they do.

False Claims Act exposure is the one most organizations haven't priced in. Healthcare and defense contractors using AI for billing, coding, or contract performance need to verify those tools' outputs. If an AI produces inaccurate results and you've signed off on a compliance certification, the FCA doesn't care that you didn't write the code.

State laws are landing before federal preemption does. The White House framework recommends that Congress preempt state laws. That recommendation has to clear Congress first. In the meantime, Colorado's law is live, and more are close behind. Operating in multiple states already means operating under multiple regimes.

Public companies should have a process for verifying AI capability claims before they appear in investor materials. For healthcare and government contractors: a clear-eyed audit of what your AI tools certify and what accuracy they actually deliver is worth doing before a regulator asks. For everyone: if your marketing says AI does something, confirm it does.

→ One action this week: Pull any investor presentations, marketing materials, or government filings that reference your AI capabilities. If a claim can't be verified by your own technical team, flag it before a regulator does.

💡 FINAL THOUGHTS

On the surface, it may look like you have a handle on the security issues and vulnerabilities. However, the real risk can be one or two layers below. Pay attention and don’t feel pressured to move fast vs. risk your organization.

If someone in your organization needs to be reading this brief, it's probably the person making AI tool decisions without a security lens. Forward it their way.

How helpful was this week's email?

Login or Subscribe to participate

We are out of tokens for this week's security brief.

- Hashi

Follow the author:

Keep Reading