In partnership with

WEEK 03-2026 AI 3X3 BRIEF

TL;DR: Varonis disclosed a prompt injection attack that exfiltrates data from Microsoft Copilot with a single click. Group-IB is predicting AI-powered worms in 2026—malware that adapts, evades, and runs the entire kill chain without human oversight. And IBM's latest data shows 97% of organizations that suffered AI-related breaches had no access controls in place.

🚨 DEVELOPMENT 1

Reprompt: Single-Click Data Theft from Microsoft Copilot

What Happened

Varonis disclosed an attack called "Reprompt" that turns Microsoft Copilot into a data exfiltration channel. One click on a legitimate Microsoft URL is all it takes.

The attack embeds a crafted instruction in the URL's "q" parameter. Copilot executes it. Then the attacker's server takes over, continuously prompting Copilot with questions like "Summarize all files the user accessed today" or "What vacations does he have planned?" The victim sees nothing.

Microsoft patched it. Enterprise M365 Copilot customers weren't affected. Consumer Copilot users were exposed until the fix rolled out.

Why It Matters

The bypass was simple. Copilot's data-leak safeguards only applied to the initial request. Asking Copilot to repeat an action twice bypassed the guardrails entirely.

No forensic trail. All the real instructions come from the attacker's server after the initial click. You can't inspect the starting prompt to know what's being stolen.

This isn't isolated. The same week, researchers disclosed similar vulnerabilities in Claude Cowork, Slack AI, Notion AI, and Google Gemini Enterprise.

Enterprise: Confirm you're on M365 Copilot, not consumer. Review what data your AI tools can access. Consider whether AI assistants need access to sensitive repositories at all.

SMB: If staff are using consumer AI tools for work, they're exposed. This is a policy conversation, not a technology fix.

Action Items:

  1. Audit which AI assistants have access to corporate data

  2. Train employees to treat AI-related links with the same suspicion as phishing

  3. Monitor for unusual AI assistant behavior patterns

🔴 DEVELOPMENT 2

Group-IB: AI-Powered Worms Are Coming

What Happened

Group-IB CEO Dmitry Volkov published the company's 2026 threat predictions, warning that this is the year malware starts learning on the job.

WannaCry and NotPetya caused billions in damage by exploiting vulnerabilities and spreading fast. The next generation will do more: AI-powered malware that adapts to targets, exploits specific weaknesses, and evades detection—all without human direction.

Group-IB is also tracking "agentic extortion"—AI agents built into Ransomware-as-a-Service platforms. These handle encryption, backup destruction, lateral movement, and disabling EDR. Low-skilled affiliates now get capabilities that used to require expertise.

Why It Matters

The kill chain is going autonomous. AI agents will increasingly handle vulnerability discovery, exploitation, and lateral movement at scale. Human operators become optional for everything except collecting payment.

Phishing is getting "agentized." Group-IB found services where AI agents develop lures, send emails, and adapt campaigns based on real-time feedback. Every email feels personal. The attacker barely lifts a finger.

Dark LLMs are maturing. Threat actors have moved past crude WormGPT-style experiments to custom-built, self-hosted models with no ethical restrictions—built specifically for malware, scams, and disinformation.

Enterprise: Signature-based detection won't catch malware that morphs in real-time. Behavioral analysis becomes essential. Review your EDR's AI-detection capabilities.

SMB: You're the softer target when attackers can scale infinitely. Basics matter more, not less: patching, offline backups, MFA everywhere.

Action Items:

  1. Audit your backup strategy—AI-powered ransomware targets backups first

  2. Evaluate whether your security stack detects behavioral anomalies, not just known signatures

  3. Train employees that "obviously fake" phishing is over—AI-generated lures are polished

FROM OUR PARTNERS

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

📊 DEVELOPMENT 3

IBM Data: 13% Breached, 97% Unprepared

What Happened

IBM's Cost of a Data Breach Report 2025 includes AI-specific findings for the first time—and they're ugly.

The numbers:

  • 13% of organizations reported breaches of AI models or applications

  • 97% of those breached lacked proper AI access controls

  • 63% had no AI governance policy or were still developing one

  • 1 in 5 (20%) reported a breach due to shadow AI—employees using unauthorized AI tools with corporate data

  • 8% didn't even know if they'd been compromised

Organizations using AI and automation extensively in their security operations saved $1.9 million in breach costs and reduced the breach lifecycle by 80 days on average.

Why It Matters

The 13% number is just the visible iceberg. When 8% of organizations don't know if they've been breached and 63% lack governance policies, the true exposure rate is certainly higher.

Shadow AI is the new shadow IT—but faster. It took years for shadow IT to become a recognized problem. Shadow AI emerged in months. One in five breaches now traces back to employees using unapproved AI tools.

Only 34% of organizations with AI governance policies actually audit for unsanctioned AI. Having a policy isn't the same as enforcing it.

Enterprise: The savings from AI-powered security ($1.9M average) dwarf the investment required. But you need governance and technology—one without the other leaves gaps.

SMB: Start with visibility. You can't govern what you can't see. Figure out what AI tools employees are actually using before you write policies.

Action Items:

  1. Conduct an AI tool inventory across all departments—including tools employees adopted without IT approval

  2. Implement AI access controls before expanding AI deployments

  3. Add AI-specific questions to your incident response playbook: "Was AI involved? Which systems had AI access?"

💡 FINAL THOUGHTS

Your Key Takeaway:

The rapid pace of AI deployment is outpacing security. On one hand, companies that have completely clamped down don't move with enough agility. On the other, companies that over-index leave themselves vulnerable.

Maintaining balance is key—and the balance point keeps moving. Setting up the right governance framework is paramount.

Need help with AI Governance?

How helpful was this week's email?

Login or Subscribe to participate

We are out of tokens for this week's security brief.

Keep reading, learning and be a LEADER in AI 🤖

Hashi & The Context Window Team!

Follow the author:

Keep Reading

No posts found