WEEK 52 AI 3X3 BRIEF
Welcome to Week 52's AI Security 3x3 Brief—the last of 2025. Happy holidays, everyone! 🎄
TL;DR: OpenAI just admitted that prompt injection attacks may never be fully solved; IBM's latest data shows shadow AI is now responsible for 20% of breaches and adds $670K to cleanup costs; and the Cloud Security Alliance found that only a quarter of organizations have comprehensive AI governance in place.
🚨 DEVELOPMENT 1
OpenAI Says Some AI Attacks May Never Be Fixable
The 3 Key Points:
1. What Happened: On December 22, OpenAI published a blog post detailing its efforts to harden its ChatGPT Atlas browser against cyberattacks. The admission buried in the announcement: "Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully 'solved.'" This echoed the UK's National Cyber Security Centre (NCSC) warning from earlier this month that prompt injection "may never be totally mitigated" and could lead to breaches on a scale exceeding the SQL injection era of the 2010s.
2. Why It Matters: This isn't a vendor downplaying a bug. It's an architectural confession. LLMs can't reliably distinguish between instructions and data—every token is potentially a command. Unlike SQL injection, which was eventually tamed through parameterized queries (see the sketch after this list), there's no equivalent fix on the horizon for prompt injection. The NCSC called LLMs "inherently confusable." OpenAI's response isn't to solve the problem but to build an internal "LLM-based automated attacker" that hunts for vulnerabilities faster than external hackers can find them. That's a treadmill, not a cure.
3. What You Need to Know: Rami McCarthy, principal security researcher at cloud security firm Wiz, framed the risk equation as "autonomy × access." Agentic AI systems—browsers, coding assistants, workflow automators—sit in the worst quadrant: moderate autonomy with very high access. OpenAI recommends users limit what agents can access and avoid giving broad instructions like "take whatever action is needed." In other words: the defense is restricting the AI's usefulness.
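Why was SQL injection tamable? Databases give developers a channel that keeps code and data separate. Here's a minimal, generic Python sketch using the standard-library sqlite3 module (the table and attacker string are made up for illustration); prompts have no equivalent of the second call, which is the heart of OpenAI's admission.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is spliced into the query, so data becomes code.
rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [('admin',)] -- the injection changed the query's logic

# Tamed: a parameterized query keeps the input strictly as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the same string is now inert

# LLM prompts have no such separation: every token the model reads,
# trusted instructions and untrusted "data" alike, can act as a command.
```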
For Enterprises: Treat prompt injection as a permanent design constraint, not a patchable vulnerability. Any AI system with access to sensitive data, credentials, or external communications should have explicit guardrails limiting autonomous actions. Build human-in-the-loop checkpoints for high-stakes operations. Monitor for anomalous behavior patterns that suggest injection attempts.
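Here's what a human-in-the-loop checkpoint can look like in practice: a minimal sketch, assuming a hypothetical agent tool-call layer where the tool names and the execute() stub stand in for whatever your agent can actually reach. The pattern is what matters (an allowlist of low-stakes autonomous actions, an explicit approval gate for consequential ones, and default-deny for everything else), not the specific framework.

```python
# Minimal human-in-the-loop gate for agent tool calls.
# Tool names and the execute() stub are illustrative placeholders,
# not any particular vendor's API.
AUTONOMOUS_OK = {"read_calendar", "search_docs"}                # low-stakes, read-only
NEEDS_APPROVAL = {"send_email", "make_payment", "delete_file"}  # consequential

def execute(tool: str, args: dict) -> str:
    """Stand-in for whatever actually performs the action."""
    return f"executed {tool} with {args}"

def run_tool_call(tool: str, args: dict) -> str:
    if tool in AUTONOMOUS_OK:
        return execute(tool, args)
    if tool in NEEDS_APPROVAL:
        print(f"Agent wants to run {tool} with {args}")
        if input("Approve? [y/N] ").strip().lower() == "y":
            return execute(tool, args)
        return "blocked: human reviewer declined"
    # Default-deny anything the agent invents that you haven't classified.
    return f"blocked: unknown tool '{tool}'"
```

The same gate is also a natural place to log every requested action, which covers the anomaly-monitoring point above.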
For SMBs: Be cautious about adopting agentic AI tools that request broad permissions. If an AI assistant wants access to your email, calendar, and payment systems, ask what happens when it gets tricked. Favor tools that require confirmation before taking consequential actions. The convenience isn't worth the exposure.
FROM OUR PARTNERS
Clear communicators aren't lucky. They have a system.
Here's an uncomfortable truth: your readers give you about 26 seconds.
Smart Brevity is the methodology born in the Axios newsroom — rooted in deep respect for people's time and attention. It works just as well for internal comms, executive updates, and change management as it does for news.
We've bundled six free resources — checklists, workbooks, and more — so you can start applying it immediately.
The goal isn't shorter. It's clearer. And clearer gets results.
🔴 DEVELOPMENT 2
Shadow AI Now Causes 20% of All Data Breaches
The 3 Key Points:
1. What Happened: IBM's 2025 Cost of a Data Breach Report, covered in CIO's analysis on December 17, quantified what security teams have been warning about: shadow AI—employees using unapproved AI tools—now accounts for 20% of all data breaches. Organizations with high levels of shadow AI paid an average of $670,000 more per breach than those with low levels or none. The report surveyed 600 organizations across 17 industries between March 2024 and February 2025.
2. Why It Matters: Shadow AI has displaced security skills shortages as one of the top three factors driving up breach costs. Nearly 60% of employees use unapproved AI tools at work, and they're feeding those tools sensitive data. IBM found that 27% of organizations report more than 30% of AI-processed data contains private or confidential information. The breach data tells the story: shadow AI incidents compromised customer PII in 65% of cases and intellectual property in 40%—both higher than global averages.
3. What You Need to Know: Employees aren't using shadow AI to be malicious. They're doing it because approved tools are slow, inadequate, or nonexistent. IBM's recommendation: "If the official tools are better than the shadow ones, employees will use them." The problem isn't just policy enforcement—it's that many organizations haven't provided alternatives worth using.
For Enterprises: Deploy discovery tools to identify what AI applications are actually in use across your organization. Most organizations don't know. Then address the gap: either sanction tools that meet security requirements or provide better alternatives. Automate the approval process—if vetting a new AI tool takes weeks, employees will skip it.
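Discovery doesn't have to wait for a product purchase. Here's a rough sketch of the idea, assuming you can export outbound proxy or DNS hostnames; the domain list is illustrative and far from exhaustive, and dedicated tooling (CASB, DNS filtering, SaaS discovery) will do this far better.

```python
# Rough shadow-AI discovery pass over exported proxy/DNS hostnames.
# The domain list and log format are illustrative assumptions.
AI_SAAS_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def flag_ai_traffic(hostnames: list[str]) -> dict[str, int]:
    """Count hits against known AI SaaS domains."""
    hits: dict[str, int] = {}
    for host in hostnames:
        for domain in AI_SAAS_DOMAINS:
            if host == domain or host.endswith("." + domain):
                hits[domain] = hits.get(domain, 0) + 1
    return hits

sample_log = ["chatgpt.com", "intranet.example.com", "claude.ai", "claude.ai"]
print(flag_ai_traffic(sample_log))  # {'chatgpt.com': 1, 'claude.ai': 2}
```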
For SMBs: Create a simple, fast process for employees to request AI tool reviews. A form and a 48-hour turnaround beats a policy nobody follows. Focus on the basics: Does this tool need access to customer data? Does it store our information? Where? Train employees on why shadow AI is risky—not as a scare tactic, but because a single breach can sink a small business.
📊 DEVELOPMENT 3
Only 25% of Organizations Have Real AI Governance
The 3 Key Points:
1. What Happened: The Cloud Security Alliance released its State of AI Security and Governance report on December 24. The headline finding: governance maturity is the single strongest predictor of whether an organization feels confident in its ability to secure AI systems. The problem is that only about 25% of organizations have comprehensive AI security governance in place. The rest are operating on partial guidelines, policies still in development, or nothing at all.
2. Why It Matters: Organizations with mature governance show tighter alignment between boards, executives, and security teams. They train staff on AI security. They have structured approval processes for AI deployments. They're confident. Everyone else is guessing. The report also found a disconnect between what organizations fear and what they're doing about it: sensitive data exposure ranked as the top concern, but model-specific risks like prompt injection and data poisoning received less attention. That's a gap between knowing the risk and actually addressing it.
3. What You Need to Know: Security teams are stepping into AI adoption earlier than other functions—testing AI in detection, investigation, and response. But ownership models remain fragmented. More than half of respondents said security teams own AI protection, yet deployment decisions are spread across IT, dedicated AI teams, and business units. Governance without clear ownership is just documentation.
For Enterprises: If you don't have a cross-functional AI governance committee, you're in the 75% flying blind. Establish one with representation from security, legal, IT, and business units. Define who owns what. Move security involvement to the earliest stages of AI projects—not as a checkpoint at the end, but as a partner in design.
For SMBs: Governance doesn't require a committee. Start with a one-page AI use policy: what's allowed, what's not, and who to ask when it's unclear. Appoint someone—even if they wear multiple hats—to be the go-to resource for AI questions. Focus your scrutiny on any AI that touches customer data. That's where the real risk lives.
🎯 ACTION PLAN
Your Key Actions From This Newsletter:
Conduct a shadow AI audit. Survey department heads: What AI tools are people actually using? Cross-reference against your approved list.
The gap between those two lists is your exposure; a short sketch below shows one way to make it concrete.
For anything unauthorized that's widely adopted, make a decision: sanction it with proper controls, provide a better alternative, or block it with clear communication about why.
Ignoring it leaves your losses uncapped when things go wrong.
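A toy version of that cross-reference, with made-up tool names, counts, and threshold; in practice the usage numbers come from your survey or discovery pass, and the approved list from wherever you track sanctioned software.

```python
# Toy shadow-AI audit: cross-reference reported usage against the approved list.
# Tool names, counts, and the 10-user threshold are made up for illustration.
approved = {"Microsoft Copilot", "GitHub Copilot"}

# Survey of department heads: tool -> number of people reporting they use it.
reported_use = {
    "Microsoft Copilot": 42, "ChatGPT": 31, "Otter.ai": 6,
    "GitHub Copilot": 18, "Claude": 12,
}

exposure = {t: n for t, n in reported_use.items() if t not in approved}
for tool, users in sorted(exposure.items(), key=lambda kv: -kv[1]):
    action = "sanction or provide an alternative" if users >= 10 else "block, and explain why"
    print(f"{tool}: {users} users -> {action}")
# ChatGPT: 31 users -> sanction or provide an alternative
# Claude: 12 users -> sanction or provide an alternative
# Otter.ai: 6 users -> block, and explain why
```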
💡 FINAL THOUGHTS
Your Key Takeaway:
AI isn't dangerous in itself, and you should embrace it where the value is clear. As with anything else, things go wrong without appropriate safety measures. There will always be some level of threat and vulnerability, and it will keep evolving. We're used to that. It's nothing new, and nothing to be afraid of.
Get ahead of governance and security, but keep the balance: controls that are too heavy-handed just push more shadow AI into the organization.
How helpful was this week's email?
We are out of tokens for this week's security brief. ✋
Keep reading, keep learning, and be a LEADER in AI 🤖
Hashi & The Context Window Team!
Follow the author:
X at @hashisiva | LinkedIn




