WEEK 08-2026 AI 3X3 BRIEF
TL;DR: ESET discovered the first Android malware that uses Google's Gemini AI during execution—asking the model in real time how to stay alive on an infected device. IBM's 2026 X-Force Threat Intelligence Index dropped this week showing 300,000+ stolen ChatGPT credentials on the dark web and a 44% jump in attacks exploiting public-facing apps. And Cisco's State of AI Security 2026 puts a number on the gap: 83% of organizations planned to deploy agentic AI, but only 29% were ready to secure it.
🚨 DEVELOPMENT 1
PromptSpy: Android Malware That Asks Gemini How to Survive
What Happened
ESET published research on February 19 detailing what they call the first Android malware to use generative AI during execution. They named it PromptSpy.
Once installed, the malware sends Google's Gemini a prompt along with a dump of everything currently displayed on the device's screen—button labels, positions, element types. Gemini responds with JSON instructions: tap here, swipe there. PromptSpy executes the action, grabs the updated screen, and sends it back. This loop continues until the AI confirms the app has been pinned in the recent apps list, keeping it alive across reboots.
The trick is clever because every Android manufacturer handles "lock app in recents" differently. A hardcoded script breaks across devices. Gemini reads the screen and figures it out.
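For defenders, the useful takeaway is the shape of the loop itself—observe the screen, ask the model, act, repeat. A minimal sketch of that structure (all names hypothetical; no real Gemini calls or device APIs here):

```python
# Illustrative sketch of the observe-ask-act loop ESET describes.
# read_screen / ask_model / perform are stand-ins for the malware's
# screen dump, Gemini API call, and Accessibility-driven input.

def persistence_loop(read_screen, ask_model, perform, max_steps=20):
    """Loop until the model reports the app is pinned in recents."""
    for _ in range(max_steps):
        screen = read_screen()      # dump of visible UI elements
        step = ask_model(screen)    # model returns a JSON action
        if step.get("done"):        # model confirms the goal is reached
            return True
        perform(step)               # e.g. {"action": "tap", "x": ..., "y": ...}
    return False
```

The point of the sketch: nothing here is device-specific. The per-manufacturer logic lives in the model's response, which is why hardcoded detection of the UI flow won't generalize.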
Beyond persistence, PromptSpy deploys a VNC module for full remote device control—capturing lockscreen PINs, recording the screen, and blocking uninstallation by placing invisible overlays on buttons like "Stop" and "Uninstall." It appears to impersonate Chase Bank and target users in Argentina. ESET hasn't seen it in their telemetry yet—possibly still a proof of concept—but the distribution website exists and the samples are functional.
Why It Matters
This is the proof of concept everyone expected—and it works. ESET already found AI-powered ransomware (PromptLock) last August; PromptSpy is their second AI-powered malware discovery. The fact that AI is only used for one function makes it easy to dismiss. Don't. The technique is modular. Next time, AI handles more of the chain.
Generative AI makes malware device-agnostic. Traditional Android malware breaks when it hits an unfamiliar screen layout. PromptSpy doesn't care what phone you have. ESET researcher Lukáš Štefanko: generative AI enables threat actors to "adapt to more or less any device, layout, or OS version."
Google's own model is being weaponized against Android users. Google told The Hacker News that Play Protect blocks known versions. But the model is called via API with a stolen key. The malware doesn't need to be on the Play Store to reach victims.
Enterprise: Confirm Google Play Protect is enabled on managed devices. Review MDM policies around Accessibility Services permissions—PromptSpy can't function without them.
SMB: Sideloaded apps remain the primary Android attack vector. If employees use personal Android devices for work, Accessibility Services permissions and app installation sources matter.
Action Items:
Restrict Accessibility Services permissions on managed Android devices via MDM
Block sideloading on corporate devices—managed app stores only
Add AI-assisted mobile malware to your team's threat awareness briefing
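For teams managing devices through Android Enterprise, the first two action items roughly map to policy fields like these. Field names follow the Android Management API as best we can tell—treat this as a starting point and verify against your MDM's own documentation:

```json
{
  "advancedSecurityOverrides": {
    "untrustedAppsPolicy": "DISALLOW_INSTALL"
  },
  "playStoreMode": "WHITELIST",
  "permittedAccessibilityServices": {
    "packageNames": ["com.example.approved.screenreader"]
  }
}
```

`DISALLOW_INSTALL` blocks sideloading, `WHITELIST` restricts installs to apps you've explicitly approved, and the accessibility allowlist is what actually starves PromptSpy-style malware of the permissions it needs.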
🔴 DEVELOPMENT 2
IBM X-Force 2026: 300K Stolen ChatGPT Logins and a 44% Surge in App Exploits
What Happened
IBM released its 2026 X-Force Threat Intelligence Index on February 25—their annual look at what's actually happening in the field, not predictions but incident data.
The AI-specific headline: infostealer malware harvested over 300,000 ChatGPT credential sets in 2025, all advertised on the dark web. IBM's assessment: AI platforms now carry the same credential risk as any other enterprise SaaS tool. Stolen chatbot logins aren't just about account access—attackers can manipulate outputs, inject prompts, and pull sensitive data from conversation histories.
The broader numbers: attacks exploiting public-facing applications jumped 44%, driven by missing authentication and AI-enabled vulnerability scanning. Vulnerability exploitation now causes 40% of all incidents. Supply chain compromises have nearly quadrupled since 2020. Active ransomware groups surged 49%.
IBM's Mark Hughes: "Attackers aren't reinventing playbooks, they're speeding them up with AI."
Why It Matters
Your AI tools are now credential targets. If employees use ChatGPT, Claude, Gemini, or any AI platform with a login, those credentials are being actively harvested by infostealers. Password reuse between personal and enterprise accounts means a stolen consumer ChatGPT login can become an enterprise access path.
AI is compressing the time between discovery and exploitation. The 44% increase in app-based attacks isn't because there are more vulnerabilities. It's because AI tools help attackers find and exploit them faster. The window between "vulnerable" and "compromised" is shrinking.
Supply chain risk is accelerating, partly because of AI-generated code. IBM specifically calls out AI coding tools introducing unvetted code into pipelines—code that works but hasn't been reviewed for security. If your developers ship AI-generated code without security review, you're contributing to the trend.
Enterprise: Treat AI platform credentials with the same rigor as any other SaaS login. Enforce SSO and MFA. Include AI platforms in your credential monitoring and dark web scanning.
SMB: If employees use personal AI tool accounts for work, those accounts are part of your risk surface. Enforce unique passwords and add AI tools to your next security awareness conversation.
Action Items:
Add AI platforms (ChatGPT, Copilot, Gemini, Claude) to your credential monitoring program
Enforce MFA and SSO on enterprise AI tool access—treat them like any other SaaS app
Include AI-generated code in your security review process before it ships
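The first action item can start as simply as scanning whatever stealer-log or breach feeds you already receive for AI-platform domains. A minimal sketch—the `url|username|password` line format and the domain list are assumptions, so adapt both to your feed:

```python
# Flag leaked credentials tied to AI platforms in a stealer-log style feed.
# Assumed line format: "url|username|password" — adjust for your source.

AI_DOMAINS = ("chatgpt.com", "chat.openai.com", "claude.ai",
              "gemini.google.com", "copilot.microsoft.com")

def flag_ai_credentials(lines):
    hits = []
    for line in lines:
        parts = line.strip().split("|")
        if len(parts) != 3:
            continue  # skip malformed rows rather than guessing
        url, username, _ = parts
        if any(d in url for d in AI_DOMAINS):
            hits.append((url, username))  # never log the password itself
    return hits
```

From there, matches against corporate email domains are your reset-and-notify queue.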
FROM OUR PARTNERS
How can AI power your income?
Ready to transform artificial intelligence from a buzzword into your personal revenue generator?
HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.
Inside you'll discover:
A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential
Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background
Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve
Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.
📊 DEVELOPMENT 3
Cisco: 83% Deploy AI Agents, Only 29% Can Secure Them
What Happened
Cisco published its State of AI Security 2026 report with fresh data on how the agentic AI wave is playing out in practice.
The core finding: 83% of organizations had planned to deploy agentic AI into business functions. Only 29% reported being ready to do so securely. That's a 54-point gap.
The report catalogs attack vectors now showing up in production. MCP (Model Context Protocol)—the standard connecting AI agents to external tools—is actively being exploited: tool poisoning, remote code execution, overprivileged access, supply chain tampering. In one case, a fake npm package mimicking an email integration silently forwarded messages to an attacker.
Agent-to-agent communication is creating new identity risks. Cisco describes compromised research agents inserting hidden instructions into output that financial agents consume—resulting in unintended transactions. And on the supply chain side, researchers demonstrated that injecting just 250 poisoned documents into training data can implant backdoors activated by trigger phrases, with no visible impact on model performance.
Why It Matters
The 54-point readiness gap is the story. Organizations aren't deploying cautiously and securing later. They're deploying at scale while admitting they can't protect what they're building. That gap doesn't close gradually. It closes when something breaks.
Agent-to-agent trust is the next identity crisis. We know how to authenticate humans and services. We don't have frameworks for authenticating AI agents talking to other AI agents—especially when those agents can be manipulated through their inputs. What Zenity Labs presented at Black Hat last year is now showing up in production environments.
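Until real frameworks exist, one interim control is making downstream agents verify where instructions came from. A minimal sketch using HMAC message signing—key distribution is out of scope here, and all names are illustrative:

```python
# Sketch: authenticate agent-to-agent messages with HMAC so a downstream
# agent can reject instructions that didn't come from a trusted peer.
import hashlib
import hmac
import json

def sign(message: dict, key: bytes) -> str:
    """Canonicalize the message and return a SHA-256 HMAC over it."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(message: dict, signature: str, key: bytes) -> bool:
    """Constant-time check that the message matches its signature."""
    return hmac.compare_digest(sign(message, key), signature)
```

This doesn't stop a compromised agent from signing malicious output—but it does stop the Cisco scenario where injected instructions masquerade as coming from a trusted agent at all.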
Poisoned training data is a supply chain risk most teams can't detect. You can audit npm packages. You can review code. Auditing whether 250 out of millions of training documents were tampered with? That's a challenge most security teams aren't staffed for.
Enterprise: Treat the Cisco readiness gap as a benchmark. Can your team answer: What data can each agent access? What actions can it take? Who's accountable when it makes a mistake? If not, governance comes before scaling.
SMB: If your team connects AI tools to business apps through integrations, verify what those integrations actually do. A Slack integration that silently copies messages is just as dangerous at 20 employees as at 20,000.
Action Items:
Map your AI agent deployments—what each agent can access, what it can do, and through which integrations
Audit MCP connections and third-party AI integrations for overprivileged access
Establish accountability frameworks for agentic AI before scaling further
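The overprivilege audit can begin as a simple scan of your integration manifests for write-level scopes. A sketch against a hypothetical config schema—your MCP server or platform will have its own format to map onto:

```python
# Flag integrations whose granted scopes exceed a read-only baseline.
# The config schema here is hypothetical — map it to your real manifests.

WRITE_SCOPES = {"write", "delete", "admin", "send"}

def audit_integrations(integrations):
    findings = []
    for item in integrations:
        risky = sorted(s for s in item.get("scopes", [])
                       if s.split(":")[-1] in WRITE_SCOPES)
        if risky:
            findings.append({"name": item["name"], "risky_scopes": risky})
    return findings
```

An email summarizer holding a send scope is exactly the kind of finding that would have caught the fake npm package Cisco describes.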
💡 FINAL THOUGHTS
We keep granting AI tools more access and more trust. Systems built to learn and improve can be turned against us just as quickly and effectively—this week's three stories are proof. Keep monitoring closely, and build the better guardrails for next year, not just for today.
Need help with AI Security?
Check out → DigiForm-AI-Governance
How helpful was this week's email?
We are out of tokens for this week's security brief. ✋
Keep reading, learning and be a LEADER in AI 🤖
Hashi & The Context Window Team!
Follow the author:
X at @hashisiva | LinkedIn