WEEK 09-2026 AI 3X3 BRIEF

TL;DR: A new survey of 250 security leaders found that 98% are slowing AI agent deployments—and only 1 in 5 organizations feels prepared for an agent-based attack. The U.S. Treasury released its Financial Services AI Risk Management Framework: 230 control objectives that give auditors something concrete to check against. And while enterprises hesitate at the top, employees are quietly bringing their own AI agents to work—a category of shadow IT that acts autonomously, runs around the clock, and is harder to govern than anything that came before it.

🚨 DEVELOPMENT 1

The CISO Verdict on Agentic AI: Not Yet

What Happened

On February 25, Apono published its 2026 State of Agentic AI Cyber Risk Report, based on a survey of 250 senior security professionals across North America, Europe, and the Middle East. The findings are blunt.

  • 98% say security and data concerns have already slowed, scoped down, or added friction to agentic AI projects

  • 100% agree that an attack targeting agentic AI workflows would be more damaging than a traditional cyberattack

  • Only 21% say their organization feels prepared to handle such an attack

Read those numbers together: every respondent expects an agent-based attack to hit harder than a traditional one, yet only one in five believes their organization could handle it.

Apono's CEO put it plainly: "There's a lot of talk about AI agents rapidly taking over enterprise workflows, but on the ground, CISOs are pressing the brakes." The bottleneck isn't enthusiasm or budget—it's identity and access. Most organizations haven't sorted out access governance for their human employees, and now they're being asked to extend that to AI systems operating autonomously.

Why It Matters

Vendors are moving faster than security teams can respond. Agentic features are shipping continuously. The tension between adoption pressure and security readiness is now documented at scale.

The preparedness gap is a board-level liability. When something goes wrong, the question won't be which agent caused it—it'll be who approved its access and what controls were in place.

"We're piloting it" isn't a governance posture. Pilots without defined identity boundaries, access scoping, and audit trails are just undocumented deployments.

Enterprise: Every agent needs an owner, a defined permission scope, and time-bound access. If you can't answer "what can this agent touch and who approved it," it shouldn't be in production.

SMB: If you've deployed any AI assistant with access to company data—email, files, CRM—verify that its access is actually scoped. Default access settings are rarely the right ones.

Action Items:

  1. Pull an inventory of every AI agent or AI-connected workflow currently in your environment—including tools employees adopted without formal approval

  2. For any agent in production: confirm who owns it, what data it can access, and whether that access is time-bound (a minimal sketch of this check follows the list)

  3. Add an agentic AI tabletop scenario to your incident response exercises this quarter—before you need it
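
As a concrete anchor for items 1 and 2, here is a minimal sketch of the "owner, scope, time-bound" check, assuming a simple in-house inventory kept in Python. The record fields, scope strings, and flag conditions are illustrative assumptions, not anything from Apono's report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    """Hypothetical inventory record; the field names are illustrative."""
    name: str
    owner: str                       # accountable human, not a team alias
    approved_by: str                 # who signed off on the access
    scopes: list[str] = field(default_factory=list)  # e.g. ["crm:read"]
    expires: datetime | None = None  # tz-aware expiry; None = standing access

def audit(inventory: list[AgentRecord]) -> list[str]:
    """Flag agents that fail the owner / scope / time-bound test."""
    findings = []
    now = datetime.now(timezone.utc)
    for agent in inventory:
        if not agent.owner or not agent.approved_by:
            findings.append(f"{agent.name}: no accountable owner or approver")
        if not agent.scopes:
            findings.append(f"{agent.name}: permission scope undefined")
        if agent.expires is None:
            findings.append(f"{agent.name}: access is not time-bound")
        elif agent.expires < now:
            findings.append(f"{agent.name}: access expired but still provisioned")
    return findings
```

If audit() returns anything for an agent, the bar in item 2 (who owns it, what it can access, whether access expires) isn't met, and per the Enterprise guidance above, it shouldn't be in production.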

🔴 DEVELOPMENT 2

Washington Hands Finance a 230-Count AI Rulebook

What Happened

On February 19, the U.S. Department of the Treasury released two new documents as part of the President's AI Action Plan: a shared AI Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). Developed in coordination with over 100 financial institutions, the Cyber Risk Institute, and the Financial Services Sector Coordinating Council, the FS AI RMF is not a policy statement or a set of principles. It's a matrix of 230 control objectives.

Those objectives span governance, data practices, model development, validation, monitoring, third-party risk, and consumer protection. Many of them map to specific system behaviors, ownership assignments, and evidence artifacts—meaning auditors and regulators can check against them. Treasury released the AI Lexicon alongside the framework to standardize terminology across legal, risk, engineering, and compliance teams. That standardization detail matters more than it sounds: one of the most consistent failure modes in AI governance is that legal, IT, and operations aren't speaking the same language.

This is the first of six planned deliverables Treasury intends to release this month.

Why It Matters

This is the shift from directional to operational. Prior AI frameworks, including NIST's, offered principles. The FS AI RMF gives examiners 230 specific control objectives to hold institutions to. Legal analysts are already describing it as an architecture standard, not guidance.

Finance gets the framework first; other sectors get the precedent. When Treasury sets 230 controls for banking, it establishes a benchmark that healthcare, energy, and manufacturing will eventually be measured against—whether they're formally regulated under it or not.

The terminology problem is underestimated. Organizations where legal, IT, and operations define "AI model," "agent," and "governance" differently will struggle to demonstrate compliance—not because they're noncompliant, but because they can't describe their own posture consistently.

Enterprise: If you're in financial services, the 230 control objectives are your starting point. If you're not, treat this as a preview. The smart move is mapping your AI governance gaps now, before a sector-specific version lands.

SMB: Treasury designed the framework to scale across institution sizes. Small financial firms aren't exempt, and eventually their vendors and suppliers won't be either.

Action Items:

  1. Download the FS AI RMF and run a gap assessment against your current AI governance documentation, specifically around model validation, third-party AI risk, and access controls (a tracking sketch follows this list)

  2. Standardize your internal AI terminology before your next board or regulator interaction; misaligned language is a compliance liability hiding in plain sight

  3. If you have AI deployed in any customer-facing or decision-making capacity, identify who is accountable for monitoring and validating its behavior—that ownership needs to be explicit and documented
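
For the gap assessment in item 1, one hypothetical way to track objectives is a simple join of each control against its status, owner, and evidence. The control IDs and objective wording below are invented placeholders; substitute the real entries from Treasury's published matrix.

```python
import csv

# Placeholder control objectives -- replace with the actual FS AI RMF entries.
# Only the bookkeeping pattern matters here, not these invented IDs.
CONTROLS = [
    {"id": "GOV-01", "objective": "AI system has a named accountable owner"},
    {"id": "VAL-03", "objective": "Model behavior is validated before release"},
    {"id": "TPR-02", "objective": "Third-party AI dependencies are inventoried"},
]

def gap_rows(assessments: dict[str, dict]) -> list[dict]:
    """Join each control objective with its assessed status, owner, evidence."""
    rows = []
    for control in CONTROLS:
        found = assessments.get(control["id"], {})
        rows.append({
            "id": control["id"],
            "objective": control["objective"],
            "status": found.get("status", "not assessed"),  # default = a gap
            "owner": found.get("owner", "UNASSIGNED"),       # default = a gap
            "evidence": found.get("evidence", ""),           # doc link or ref
        })
    return rows

def export_csv(rows: list[dict], path: str = "fs_ai_rmf_gaps.csv") -> None:
    """Write the gap report somewhere auditors and the board can read it."""
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

Every "UNASSIGNED" owner and "not assessed" status becomes a line item in the gap report, which is exactly the explicit, documented ownership item 3 asks for.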

FROM OUR PARTNERS

Attio is the AI CRM for modern teams.

Connect your email and calendar, and Attio instantly builds your CRM. Every contact, every company, every conversation, all organized in one place.

Then Ask Attio anything:

  • Prep for meetings in seconds with full context from across your business

  • Know what’s happening across your entire pipeline instantly

  • Spot deals going sideways before they do

No more digging and no more data entry. Just answers.

📊 DEVELOPMENT 3

Employees Are Running Unsanctioned AI Agents. That's a Different Problem Than Shadow IT.

What Happened

This week, ArmorCode launched AI Exposure Management (AIEM), a platform capability designed to give enterprises visibility into where AI is actually running across their environments—including AI no one officially approved.

The launch framed a distinction worth understanding: shadow AI has changed. The first wave was passive, with employees pasting proprietary data into public chatbots. Problematic, but each paste was a single, discrete exposure. The newer risk is what the industry is starting to call BYOA: Bring Your Own Agent.

BYOA is employees running personal AI agent platforms for work tasks without IT oversight. Unlike a chatbot, these agents operate continuously, hold persistent memory, connect to email and messaging apps, and take autonomous action. The exposure isn't a one-time paste. It's an ongoing process with corporate credentials attached—running whether the employee is at their desk or not.

Nearly 80% of ArmorCode's Fortune 500 and Fortune 1000 customers are now pushing for visibility into agents, MCP servers, and shadow AI. That's not a feature request—it's an acknowledgment that the problem already exists in their environments.

Why It Matters

The risk category changed, not just the scale. An unauthorized chatbot creates a data risk. An unauthorized agent creates an operational risk—it acts, modifies records, and communicates externally on a continuous basis.

The adoption curve mirrors BYOD and shadow IT, but compressed. Employees used Dropbox before cloud storage policy existed. They used personal phones before MDM existed. Agent tools are following the same path, faster.

Visibility has to come before policy. Organizations that discover a BYOA problem through a breach are in a materially different position than those that found it proactively.

Enterprise: Each unsanctioned personal agent an employee runs on hardware touching corporate systems is a nonhuman identity your IAM tools don't account for. Get the inventory before writing the policy.

SMB: Employees are almost certainly experimenting: not maliciously, but because the tools are useful and there's no clear guidance saying not to. Get ahead of that conversation before an incident makes it unavoidable.

Action Items:

  1. Survey your team on AI tool usage—specifically tools they use independently for work tasks; anonymized surveys tend to surface more than audits

  2. Before writing an AI use policy, establish what "sanctioned" looks like: which tools are approved, at what permission levels, and with what oversight (a reconciliation sketch follows this list)

  3. If BYOA incidents are a realistic threat in your environment, add agent-based misuse scenarios to your security training—employees need to understand why an unsanctioned agent is different from an unsanctioned app
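
For item 2, here is a sketch of how survey responses could be reconciled against a sanctioned-tool list. The tool names and permission tiers below are invented for illustration; the point is the three buckets, not the specific entries.

```python
# Hypothetical sanctioned-tool registry: tool name -> approved permission tier.
SANCTIONED = {
    "corp-copilot": "read-only",
    "crm-assistant": "scoped-write",
}

def classify(survey_responses: list[dict]) -> dict[str, list[str]]:
    """Split reported tools into sanctioned, over-permissioned, unsanctioned."""
    buckets: dict[str, list[str]] = {
        "sanctioned": [],
        "over_permissioned": [],
        "unsanctioned": [],
    }
    for resp in survey_responses:
        tool = resp["tool"]
        access = resp.get("access", "unknown")
        if tool not in SANCTIONED:
            buckets["unsanctioned"].append(tool)       # BYOA candidates
        elif access != SANCTIONED[tool]:
            buckets["over_permissioned"].append(tool)  # approved tool, wrong tier
        else:
            buckets["sanctioned"].append(tool)
    return buckets

# Example: one response per surveyed employee/tool pair.
print(classify([
    {"tool": "corp-copilot", "access": "read-only"},
    {"tool": "personal-agent-x", "access": "full-mailbox"},
]))
```

The unsanctioned bucket is your BYOA shortlist, and the over-permissioned one deserves just as much attention.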

💡 FINAL THOUGHTS

Security leaders are getting clearer on what AI risk actually looks like. The harder problem is balancing the pressure to move fast with the governance needed to do it safely — too much caution stifles transformation, too little creates exposure. That balance is the work.

Need help with AI Security?

We are out of tokens for this week's security brief.

Keep reading, keep learning, and be a LEADER in AI 🤖

Hashi & The Context Window Team!
