WEEK 03-2026 AI 3X3 BRIEF

TL;DR: Zscaler red-teamed enterprise AI systems and found critical vulnerabilities in 100% of them—with a median breach time of 16 minutes. The World Economic Forum's latest data shows 87% of executives now rank AI vulnerabilities as their fastest-growing cyber risk. And a fake AI coding assistant on the VS Code Marketplace reminded everyone what happens when employees install tools IT never approved.

🚨 DEVELOPMENT 1

Zscaler: Every Enterprise AI System They Tested Was Hackable

What Happened

Zscaler released the ThreatLabz 2026 AI Security Report on January 27, based on nearly one trillion AI/ML transactions across 9,000 organizations throughout 2025.

The headline finding: when their red team tested enterprise AI systems under adversarial conditions, 100% had critical vulnerabilities. The median time to first critical failure was 16 minutes. 90% were compromised in under 90 minutes. The fastest? One second.

Meanwhile, AI usage keeps climbing. Activity surged 91% year-over-year across more than 3,400 applications. Data transfers to AI tools jumped 93%, totaling over 18,000 terabytes. ChatGPT alone triggered 410 million data loss prevention violations.

Why It Matters

This validates what OpenAI admitted in December. When they published their efforts to harden ChatGPT Atlas, they acknowledged prompt injection "is unlikely to ever be fully solved." Zscaler just showed what that looks like in practice: systems that break almost immediately under real attack conditions.

Most organizations can't even see the problem. The report found many enterprises still lack a basic inventory of their AI models and embedded AI features. You can't secure what you don't know exists.

AI has become a primary attack vector. Zscaler's EVP of Cybersecurity put it bluntly: AI is "no longer just a productivity tool but a primary vector for autonomous, machine-speed attacks by both crimeware and nation-state" actors.

Enterprise: Start with inventory. Which AI tools touch your data? Which SaaS platforms have embedded AI features running by default? If you can't answer quickly, that's your first project.

SMB: The same tools your team uses—ChatGPT, Grammarly, coding assistants—are moving massive amounts of data. The 410 million DLP violations weren't all from Fortune 500 companies.

Action Items:

  1. Build an AI asset inventory covering standalone tools and embedded features in existing SaaS (see the sketch after this list)

  2. Red-team your AI deployments—or assume they're vulnerable until proven otherwise

  3. Implement behavioral monitoring; signature-based detection won't catch attacks that adapt in real time
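
For teams starting from zero, one low-effort way to seed that inventory is to flag traffic to known AI-tool domains in an existing proxy or DNS export. The Python sketch below is a minimal starting point, not a product: the log format (a CSV with a `host` column), the file name, and the domain watchlist are all assumptions to adapt to your own environment.

```python
# Minimal sketch: flag AI-tool traffic in a web proxy export to seed an AI asset inventory.
# Assumptions: a CSV export named proxy_export.csv with a 'host' column; adjust both to your logs.

import csv
from collections import Counter

# Hypothetical watchlist of AI-tool domains; extend with the services your organization actually sees.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def inventory(proxy_csv: str) -> Counter:
    """Count requests per AI tool in a CSV proxy export that has a 'host' column."""
    hits = Counter()
    with open(proxy_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            for domain, tool in AI_DOMAINS.items():
                if host == domain or host.endswith("." + domain):
                    hits[tool] += 1
                    break
    return hits

if __name__ == "__main__":
    for tool, count in inventory("proxy_export.csv").most_common():
        print(f"{tool}: {count} requests")
```

The output is only a seed: embedded AI features inside SaaS platforms you already license won't show up in this kind of traffic view and still need to be catalogued from vendor documentation and admin consoles.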

🔴 DEVELOPMENT 2

WEF: Executives See AI Risk Clearly—Priorities Are Shifting

What Happened

The World Economic Forum released its Global Cybersecurity Outlook 2026 ahead of Davos, surveying C-suite executives and security leaders across 92 countries.

The numbers: 87% identified AI-related vulnerabilities as the fastest-growing cyber risk over 2025. And 94% expect AI to be the most significant force shaping cybersecurity this year.

But the interesting shift is what executives worry about. In 2025, adversarial AI capabilities topped the list at 47%, with data leaks from GenAI at just 22%. In 2026, that flipped: data leaks (34%) now outrank adversarial capabilities (29%).

Organizations are responding. The share assessing AI security before deployment nearly doubled, from 37% to 64%. Still, roughly a third have no process to validate AI security at all.

Why It Matters

The threat model is maturing. Executives have moved past abstract fears about AI-powered attacks toward concrete concerns about data exposure from their own deployments. That's progress—it means they're thinking about the AI they control, not just the AI attackers might use.

CEOs and CISOs see different risks. CEOs now rank cyber-enabled fraud as their top concern, above ransomware. CISOs still prioritize ransomware and supply chain disruption. This gap matters when budget decisions get made.

Knowing isn't the same as doing. Even with 87% flagging AI vulnerabilities as the fastest-growing risk, a third of organizations deploy AI with no security review. Awareness without action is just anxiety.

Enterprise: Use the WEF data to frame board conversations. "87% of your peers see this as the fastest-growing risk" is a useful sentence when requesting AI security budget.

SMB: The data leak concern applies to you too. Every AI tool your employees use with company data is a potential exposure point. Start with knowing what's in use.

Action Items:

  1. Align CEO and CISO risk priorities—different threat models lead to misallocated resources

  2. Establish pre-deployment AI security reviews if you don't have them

  3. Treat data leak prevention as the primary AI risk, not just adversarial attacks

FROM OUR PARTNERS

Turn AI Into Extra Income

You don’t need to be a coder to make AI work for you. Subscribe to Mindstream and get 200+ proven ideas showing how real people are using ChatGPT, Midjourney, and other tools to earn on the side.

From small wins to full-on ventures, this guide helps you turn AI skills into real results, without the overwhelm.

📊 DEVELOPMENT 3

Moltbot: A Fake AI Assistant Walked Right Into the VS Code Marketplace

What Happened

On January 27, security firm Aikido flagged a malicious extension on Microsoft's official VS Code Marketplace. It was called "ClawdBot Agent - AI Coding Assistant" and claimed to be a free AI coding tool.

It worked as advertised—the AI features were functional. But in the background, the extension silently downloaded and installed ConnectWise ScreenConnect, a legitimate remote access tool, configured to connect to an attacker-controlled server. The moment VS Code launched, the attacker had persistent remote access.

The kicker: Moltbot (the project it impersonated, formerly known as Clawdbot) doesn't have an official VS Code extension. The attackers just claimed the name first.

Microsoft removed the extension quickly. But the same week, researcher Jamieson O'Reilly found hundreds of real Moltbot instances exposed to the internet with unauthenticated admin ports—leaking API keys, OAuth credentials, and full chat histories.

Why It Matters

This is what shadow AI looks like in practice. Moltbot has 85,000+ GitHub stars. It's popular because it's easy to deploy. But "easy to deploy" and "secure by default" are different things. Google's VP of Security Engineering, Heather Adkins, was blunt: "Don't run Clawdbot."

Functional malware avoids suspicion. The fake extension actually worked as an AI assistant. Users had no reason to complain or investigate. This is the playbook: give people what they want while taking what you need.

The supply chain extends to skill libraries. O'Reilly also demonstrated a proof-of-concept attack on MoltHub, Moltbot's skill-sharing repository. He uploaded a package and watched as developers from seven countries downloaded it. The payload was benign, but it could have been anything.

Enterprise: Treat AI tools as privileged integrations. If a tool has file system access and network access and runs automatically, it needs the same scrutiny as any other software with those permissions.

SMB: Your developers are installing AI coding assistants. Do you know which ones? Do they know how to verify legitimacy? A five-minute conversation now beats a breach later.

Action Items:

  1. Audit what AI extensions and tools developers have installed in their environments (a starting-point sketch follows this list)

  2. Block or restrict installation of unapproved AI tools through endpoint management

  3. Train teams to verify official sources—popularity and functional features don't equal legitimacy
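
For developer workstations, one quick way to begin that audit is VS Code's own CLI: `code --list-extensions` prints the IDs of installed extensions, which you can diff against an approved set. The sketch below assumes the `code` command is on PATH; the allowlist entries are placeholders for illustration, not a recommendation.

```python
# Minimal sketch: compare installed VS Code extensions against an approved list.
# Assumption: the `code` CLI is available on PATH; the APPROVED IDs are placeholders.

import subprocess

APPROVED = {
    "ms-python.python",   # example entries only -- replace with your organization's approved IDs
    "github.copilot",
}

def installed_extensions() -> set[str]:
    """Return the extension IDs reported by `code --list-extensions`."""
    result = subprocess.run(
        ["code", "--list-extensions"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip().lower() for line in result.stdout.splitlines() if line.strip()}

if __name__ == "__main__":
    unapproved = installed_extensions() - {ext.lower() for ext in APPROVED}
    for ext in sorted(unapproved):
        print(f"UNAPPROVED: {ext}")
```

Run it per developer, or push the equivalent check through your endpoint management tooling, and review anything flagged before deciding whether to block it.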

💡 FINAL THOUGHTS

Your Key Takeaway:

The gap between AI capability and AI security keeps widening, and that pattern will repeat for a while. To close it, treat AI as infrastructure that needs governance, not as a collection of standalone applications and tools.

Need help with AI Governance?


We are out of tokens for this week's security brief.

Keep reading, keep learning, and be a LEADER in AI 🤖

Hashi & The Context Window Team!
