WEEK 02-2026 AI 3X3 BRIEF

TL;DR: A critical ServiceNow vulnerability showed what happens when AI agents get bolted onto legacy authentication, the EU AI Act compliance deadline is now seven months away with U.S. state laws adding complexity, and Harvard Business Review published research arguing conventional cybersecurity is structurally inadequate for AI systems.

🚨 DEVELOPMENT 1

ServiceNow's "BodySnatcher" Vulnerability: AI Agents Meet Legacy Security

The 3 Key Points:

1. What Happened: AppOmni researchers disclosed CVE-2025-12420, a critical vulnerability in ServiceNow's AI-powered Virtual Agent and Now Assist platform. Rated 9.3 on the CVSS scale, the flaw allowed unauthenticated attackers to impersonate any user—including administrators—using only an email address and a hardcoded credential shared across the entire platform. No phishing required. No user interaction. Dark Reading called it "the most severe AI-driven vulnerability uncovered to date." ServiceNow patched hosted instances in October 2025, with public disclosure this week.

2. Why It Matters: This wasn't some exotic zero-day. It was a basic authentication failure amplified by AI agent permissions. The AI didn't create the vulnerability—it made the vulnerability catastrophic. Attackers could instruct the AI agent to override security controls, create backdoor accounts with full privileges, and pivot to connected systems like Salesforce and Microsoft 365. When 85% of Fortune 500 companies use ServiceNow, that's not a platform problem—it's a supply chain problem. A simplified sketch of this failure pattern follows the points below.

3. What You Need to Know:

For Enterprises: Confirm your ServiceNow instance is patched (Now Assist AI Agents version 5.1.18+ or 5.2.19+, Virtual Agent API version 3.15.2+ or 4.0.4+). If you deployed Now Assist before October 30, conduct a forensic review for persistent access. Audit AI agent permissions across your entire SaaS stack—not just ServiceNow.

For SMBs: If you use any SaaS platform with AI features bolted on, recognize that new AI capabilities introduce novel attack vectors legacy security can't handle. Demand transparency from vendors about how their AI agents authenticate and what permissions they hold.
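For the technically inclined, the failure pattern is easy to see in miniature. The sketch below is not ServiceNow's code; it is a hypothetical Python illustration of what "a hardcoded credential shared across the whole platform plus a caller-supplied email" amounts to, and how binding a session to a verified identity changes the picture. Every name in it is made up for illustration.

```python
# Illustrative only: NOT ServiceNow code. A toy example of the bug class behind
# BodySnatcher-style flaws: a credential hardcoded into the product (shared by
# every customer) plus caller-supplied identity lets anyone impersonate anyone.

SHARED_PLATFORM_SECRET = "platform-wide-static-token"  # same value everywhere: the core mistake


def vulnerable_agent_session(email: str, secret: str) -> dict:
    """Grants an AI-agent session for whoever `email` claims to be."""
    if secret != SHARED_PLATFORM_SECRET:  # the only check performed
        raise PermissionError("bad secret")
    # No proof the caller actually owns this mailbox.
    return {"acting_as": email, "permissions": "inherited-from-user"}


def safer_agent_session(email: str, id_token: str, verify_token) -> dict:
    """Sketch of the fix: bind the session to a verified, per-user identity."""
    claims = verify_token(id_token)  # e.g. SSO / OIDC verification, unique per user
    if claims["email"] != email:
        raise PermissionError("identity mismatch")
    return {"acting_as": claims["email"], "permissions": "least-privilege-scoped"}
```

The point is not the specific code but the design question to put to vendors: is the AI agent's authority derived from a verified user identity, or from a shared secret baked into the product?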

FROM OUR PARTNERS

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

🔴 DEVELOPMENT 2

The Compliance Clock: EU AI Act and U.S. State Law Fragmentation

The 3 Key Points:

1. What Happened: The Council on Foreign Relations published an analysis from six AI policy experts declaring 2026 the year AI governance becomes a mandatory reality. The European Union's AI Act requirements for "high-risk" AI systems take full effect in August 2026, with penalties up to €35 million or 7% of global annual turnover. Meanwhile, Illinois, Colorado, and California have all implemented their own AI-specific regulations, with Colorado recently pushing its deadline to June 30, 2026. The CFR analysis describes implementing this patchwork as "devilishly difficult."

2. Why It Matters: The grace period is over. The EU's definition of "high-risk" is broad enough that most companies using AI for HR decisions, credit assessments, or access control fall under it. The U.S. has no federal AI law, and while a recent executive order signals potential preemption of state laws, that preemption would have to survive a legal fight that has not yet begun. Plan for the laws that exist, not the ones that might get overturned. The EU Commission is considering pushing the high-risk deadline to December 2027, but "maybe delayed" is not a compliance strategy.

3. What You Need to Know:

For Enterprises: Begin classifying your AI systems now to determine which fall under "high-risk" categories. The EU requires extensive documentation and risk assessments—and those take longer than you think. Budget for navigating conflicting requirements across the EU AI Act and multiple U.S. state laws. A minimal sketch of such an inventory follows the points below.

For SMBs: Any business with customers or operations in the EU is subject to the same penalties, making compliance a potential existential threat. The cost of state-by-state compliance may force hard decisions about which markets to serve. Consider external compliance expertise if you lack internal resources.
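If you want a concrete starting point for that classification exercise, here is a minimal Python sketch of an AI-system inventory with a first-pass risk flag. The category list and triage logic are illustrative assumptions only, not legal advice; actual EU AI Act classification needs counsel review.

```python
# A minimal sketch of an AI-system inventory with a rough first-pass risk flag.
from dataclasses import dataclass

# Hypothetical shortlist of high-risk use cases (illustrative, not exhaustive).
HIGH_RISK_USES = {"hiring", "credit_scoring", "access_control", "safety_critical"}


@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str            # e.g. "hiring", "marketing_copy"
    processes_eu_data: bool


def first_pass_flag(system: AISystem) -> str:
    """Rough triage only: flags systems that deserve a formal assessment."""
    if system.use_case in HIGH_RISK_USES and system.processes_eu_data:
        return "review-as-high-risk"
    return "document-and-monitor"


inventory = [
    AISystem("ResumeScreener", "VendorA", "hiring", True),
    AISystem("BlogDrafter", "VendorB", "marketing_copy", False),
]
for s in inventory:
    print(s.name, "->", first_pass_flag(s))
```

Even a spreadsheet version of this, kept current, puts you ahead of most organizations when the documentation requirements land.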

📊 DEVELOPMENT 3

Harvard Business Review: Your Security Model Wasn't Built for AI

The 3 Key Points:

1. What Happened: Harvard Business Review published research arguing that conventional cybersecurity is structurally inadequate for AI systems. The piece opens with "EchoLeak"—a June 2025 vulnerability that extracted sensitive Microsoft 365 Copilot data through a zero-click exploit (no user interaction required). Author Hugo Huang argues legacy security models were designed for predictable software systems and application-layer defenses, not the dynamic, interconnected nature of AI infrastructure.

2. Why It Matters: Most organizations are trying to secure AI at the application layer. The research says the real vulnerabilities are in the infrastructure—GPUs, drivers, system-level software underneath everything. You're locking the front door while the foundation has cracks. New attack vectors like data poisoning and model inversion don't look like traditional breaches. They don't trip existing alerts. They corrupt how AI learns and operates at a level most security teams aren't monitoring.

3. What You Need to Know:

For Enterprises: Your AI strategy may be built on an insecure foundation. Vulnerabilities in the hardware and cloud infrastructure you rely on require a fundamental rethinking of security architecture—shifting focus from the application layer to securing the entire AI supply chain, from hardware to data pipelines. Frameworks like the NIST AI Risk Management Framework (AI RMF) are a starting point.

For SMBs: If you're using popular AI tools like ChatGPT or Copilot, you're implicitly trusting the security of underlying infrastructure this research shows is vulnerable. Shadow AI compounds the problem—over 80% of workers use unapproved AI tools, feeding sensitive data into systems without oversight. Prioritize employee training and clear AI usage policies.

🎯 ACTION PLAN

Your Key Action This Week:

1. ServiceNow Users: Verify patch status. If you're self-hosted or have custom configurations, confirm you're running the patched versions. If you deployed Now Assist before October 30, run forensic analysis.

2. Compliance Teams: Inventory every AI system you use—including third-party tools with AI features. Classify which fall under EU "high-risk" categories (HR, lending, access control, safety systems). Start documentation now.

3. Security Teams: Map your AI supply chain, not just your AI applications. Implement an AI acceptable use policy if you haven't already. Audit what AI tools employees are actually using versus what's sanctioned.
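For that last audit, the sanctioned-versus-observed comparison can start as something very small, as sketched below. The tool names and the telemetry sources it assumes (proxy or DNS logs, SSO app catalogs, expense reports) are placeholders, not recommendations for or against any product.

```python
# A minimal sketch of the "sanctioned vs. actually used" audit from step 3.
# `observed` would come from whatever telemetry you already have; the names
# here are placeholders.

SANCTIONED = {"copilot", "servicenow-now-assist"}

observed = {"copilot", "chatgpt-free-tier", "random-pdf-summarizer"}  # from your own logs

shadow_ai = observed - SANCTIONED          # in use, never approved
unused_sanctioned = SANCTIONED - observed  # approved, apparently ignored

print("Unsanctioned AI tools in use:", sorted(shadow_ai))
print("Sanctioned but unobserved:", sorted(unused_sanctioned))
```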

💡 FINAL THOUGHTS

Your Key Takeaway:

The speed of AI deployment has outrun our ability to secure it, govern it, and understand it.

None of this is a reason to stop using AI. It is a reason to focus on getting the foundations right.

We are out of tokens for this week's security brief.

Keep reading, keep learning, and be a LEADER in AI 🤖

Hashi & The Context Window Team!
