WEEK 06-2026 AI 3X3 BRIEF

TL;DR: The OpenClaw agent we covered last week? SecurityScorecard found 135,000+ exposed instances, many exploitable via remote code execution. A popular AI chat app left 300 million private messages from 25 million users in an unsecured database. And new analysis from the AI Incident Database shows deepfake fraud has gone industrial—cheap enough for anyone to deploy at scale.

🚨 DEVELOPMENT 1

OpenClaw: 135,000+ Exposed AI Agents and Counting

What Happened

Last week's newsletter covered what OpenClaw is and why it matters. This is the security update.

SecurityScorecard's STRIKE team published research on February 9 identifying over 40,000 internet-facing OpenClaw instances. By the time The Register wrote it up hours later, that number had passed 135,000. It's still climbing.

The core problem: OpenClaw binds to all network interfaces by default—including the public internet—unless you manually restrict it. Most users didn't. Three high-severity CVEs have been disclosed with public exploit code, including one (CVE-2026-25253) allowing full gateway takeover from a single malicious link. STRIKE found 63% of deployments vulnerable, 12,800+ exploitable via RCE, and many traced to corporate IP space.
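The difference is a single bind address. A minimal Python illustration of the distinction (using OS-assigned ports rather than OpenClaw's actual 18789, and hypothetical names):

```python
import socket

def open_listener(host: str, port: int = 0) -> socket.socket:
    """Bind a TCP listener; port 0 asks the OS for any free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen()
    return s

# "0.0.0.0" answers on every network interface -- the open-by-default
# behavior described above. "127.0.0.1" is reachable only from this machine.
exposed = open_listener("0.0.0.0")
restricted = open_listener("127.0.0.1")
```

Restricting the bind address to loopback (or putting the service behind an authenticated reverse proxy) is the standard mitigation for any locally hosted agent.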

Cisco analyzed 31,000 community-built "skills" and found 26% had at least one vulnerability. A popular skill called "What Would Elon Do?" turned out to be functional malware—artificially pumped to the #1 ranking.

Why It Matters

Convenience defaults are dangerous defaults. Open-by-default network binding on a tool with shell access and credential storage is how mass exposure happens. STRIKE put it bluntly: "What looks like convenience is actually a concentration of risk."

90% of instances are running outdated versions. STRIKE found 90.3% of detected deployments still identify as "Clawdbot" or "Moltbot"—deployed during the viral period and never updated. The patches exist. Nobody's applying them.

The skill store is a supply chain risk. A quarter of community extensions have vulnerabilities. The most popular ones can be gamed to the top. It's an app store without any of the vetting infrastructure actual app stores provide.

Enterprise: If anyone experimented with OpenClaw on company hardware, treat it as active exposure. API keys and OAuth tokens on those machines are at risk.

SMB: An employee installs an open-source AI agent, connects it to their email and calendar, and your corporate data becomes internet-accessible. Ask the question before the breach forces it.

Action Items:

  1. Scan your network for hosts listening on port 18789, the default OpenClaw port

  2. If instances are found, audit what credentials and API keys are accessible on those machines

  3. Update your acceptable use policy to address locally-hosted AI agents, not just cloud-based tools
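Action item 1 can start with a quick sweep like the sketch below. This is a minimal illustration, not a substitute for a proper scanner; the subnet and timeout are placeholders to adapt, and an open port 18789 only indicates a candidate worth investigating.

```python
import ipaddress
import socket

OPENCLAW_PORT = 18789  # default OpenClaw listener port

def port_open(host: str, port: int = OPENCLAW_PORT, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan_subnet(cidr: str, port: int = OPENCLAW_PORT) -> list[str]:
    """Check every host address in a subnet for the given open port."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()
            if port_open(str(ip), port)]

# Example: scan_subnet("192.168.1.0/24") returns the hosts to audit next.
```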

🔴 DEVELOPMENT 2

Chat & Ask AI: 300 Million Private Messages Left in the Open

What Happened

An independent security researcher discovered that Chat & Ask AI—one of the most popular AI apps on both app stores, with 50 million+ downloads—left its backend database wide open.

The researcher, who goes by Harry, accessed roughly 300 million messages from more than 25 million users before disclosing the vulnerability to the developer, Codeway.

Chat & Ask AI is a "wrapper" app. It doesn't run its own AI—it connects users to ChatGPT, Claude, and Gemini while handling the storage itself. That's where it failed. The app used Google Firebase with security rules set to public, meaning anyone with the project URL could read the entire database without authentication.

The exposed data included full chat histories, timestamps, and data from other Codeway apps. 404 Media reported the content included requests for suicide methods, instructions for illegal activities, and detailed medical and financial information. Codeway patched it within hours of disclosure.

Why It Matters

The AI models weren't breached. The app around them was. OpenAI, Anthropic, and Google weren't compromised here. The third-party wrapper storing conversations was. The risk isn't the model—it's the middleman.

People treat AI chats like journals. Users share things with chatbots they wouldn't say out loud. When the storage layer fails, that intimacy becomes a liability.

Firebase misconfigurations are not new—and that's the point. An app with 50 million downloads shipped with security rules set to public. That tells you everything about the gap between growth speed and security investment in the AI app market.
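For context, this failure mode is a one-line rules setting. A Firebase Realtime Database ruleset roughly like the following makes the entire database readable and writable by anyone who knows the project URL (illustrative only; Codeway's actual rules were not published):

```json
{
  "rules": {
    ".read": true,   // anyone on the internet can read the whole database
    ".write": true   // ...and modify it
  }
}
```

The baseline fix gates access on authentication (`".read": "auth != null"`), and production apps should scope further so users can only reach their own records.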

Enterprise: Employees using third-party AI wrapper apps route corporate data through systems with unknown security postures. The models may be enterprise-grade. The apps accessing them often aren't.

SMB: Find out which AI chat apps your team uses for work—especially mobile wrappers. The popular ones aren't necessarily the secure ones.

Action Items:

  1. Inventory which AI chat apps employees use for work—especially mobile wrapper apps

  2. Clarify policy on third-party AI apps with corporate data, even when the underlying model is from a trusted provider

  3. Check Harry's Firehound registry for apps your organization uses

FROM OUR PARTNERS

Turn AI into Your Income Engine

Ready to transform artificial intelligence from a buzzword into your personal revenue generator?

HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.

Inside you'll discover:

  • A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential

  • Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background

  • Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve

Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.

📊 DEVELOPMENT 3

Deepfake Fraud Goes Industrial

What Happened

The AI Incident Database published new analysis showing deepfake fraud has moved from niche experiments to industrial-scale operations.

The numbers: deepfake video scams surged 700% over three years. Eight million deepfake files were shared in 2025, up from 500,000 two years earlier. UK consumers lost £9.4 billion to AI-powered fraud in nine months. Deloitte projects AI-driven fraud losses will hit $40 billion by 2027.

The incidents aren't hypothetical. A finance officer at a Singaporean multinational paid nearly $500,000 to scammers during what he believed was a video call with leadership—every person on the call was a deepfake. The CEO of an AI security firm nearly hired a fake engineer after a video interview, only catching it through detection software.

Why It Matters

The old red flags are gone. Bad grammar, inconsistent photos, inability to do a video call—deepfake tools have eliminated every one of them. Scammers now produce real-time video convincing enough to fool executives.

This isn't just consumer fraud—it's an enterprise attack vector. Deepfake candidates in job interviews. Deepfake executives approving wire transfers. Deepfake vendors requesting payment changes. Each of these has already happened.

Fraud is now the dominant category of AI harm. The AI Incident Database found that frauds and scams have been the largest category of reported incidents in 11 of the past 12 months. Not an emerging trend. The main event.

Enterprise: Review verification procedures for high-value transactions initiated by video or voice. If a deepfake can trigger it, you need out-of-band confirmation. No exceptions.

SMB: Hiring and financial approvals are your two biggest exposure points. Video interviews alone are no longer sufficient identity verification. Payments above a threshold need confirmation through a separate channel.

Action Items:

  1. Implement out-of-band verification for wire transfers and vendor payment changes—regardless of how convincing the video call looks

  2. Add deepfake awareness to employee training: video calls are no longer proof of identity

  3. Add verification steps beyond video to your remote hiring process

💡 FINAL THOUGHTS

Speed is outrunning scrutiny. That's normal, and there's no cause to panic or hit the pause button. Just keep security anchored as a pillar of your AI strategy: the ROI on AI is becoming more tangible, but a single security compromise can erode those returns quickly.


We are out of tokens for this week's security brief.

Keep reading, keep learning, and be a LEADER in AI 🤖

Hashi & The Context Window Team!
