TL;DR:
Every enterprise software vendor in your inbox right now is selling an "AI agent." Most of them are not. Gartner estimates that of the thousands of vendors claiming agentic AI capabilities, roughly 130 are legitimate — meaning about 95% of what's being marketed as an "agent" is a chatbot with a rebrand and a price increase. The market has a name for this: agent washing. And it's about to get expensive. Gartner predicts that more than 40% of agentic AI projects will be canceled by the end of 2027 — not because the technology doesn't work, but because most organizations can't tell the difference between the tools that do and the ones that don't.
What Actually Makes an Agent an Agent
Before getting into how vendors are abusing the term, it helps to know what it actually means.
A genuine AI agent perceives what's happening in its environment, reasons about what to do, selects the right tools, takes action, and adjusts when things don't go as planned — all without being told each step. That last part is the critical piece. An AI assistant waits for you to ask it something. An AI agent pursues a goal.
The distinction matters because the operational complexity — and the risk — scales dramatically as you move from one to the other. A chatbot that drafts emails is a useful tool. An agent that reads your inbox, schedules meetings, flags contract anomalies, and files expense reports operates with far more independence, touches far more systems, and can cause far more damage when it goes wrong.
Gartner defines the dividing line clearly: genuine agents feature "goal decomposition, dynamic tool use, memory, and adaptive behavior." What most vendors are shipping is scripted automation that follows predetermined paths, with a language model bolted on top so the interface feels conversational. The workflow is fixed. The intelligence is cosmetic.
What the Vendor Pitch Actually Looks Like
There's a reason this is happening now. "Agentic AI" is the hottest category in enterprise software, and the definition is vague enough to exploit. There's no universal standard, no certification, no audit. So vendors do what vendors always do when a hot label appears with no gatekeeping: they apply it to everything.
A marketing automation tool that sequences emails becomes an "agentic marketing system." An RPA workflow that follows a decision tree becomes an "intelligent agent." A customer service chatbot that routes tickets to a human becomes an "autonomous agent." The underlying software didn't change. The pitch deck did.
The tells are consistent. When a vendor demo shows an agent completing a task but skips over why it made each decision, that's a scripted workflow, not autonomous reasoning. When they struggle to describe what happens when the agent encounters a scenario it hasn't seen before, that's because the answer is "it fails" — and scripted tools don't adapt. When the sales deck leads with "fully autonomous" and then quietly mentions the need for "ongoing human oversight at key decision points," read that footnote carefully.
HFS Research put it plainly in a 2025 analysis: "Vendors are rebadging copilots as 'agents' to imply autonomy and impact, yet most merely offer text to action inside existing workflows." Their label for it — agentic-washing — is as good as any.

Regulatory crackdown on mislabeling AI and its capabilities. Source: HFS Research, 2025
The Numbers Behind the Noise
The market data running underneath all of this is worth sitting with for a moment.
Gartner's June 2025 analysis found only about 130 genuine agentic AI vendors in a market crowded with thousands of claimants. That same research predicted that 40%+ of agentic AI projects will be canceled by 2027, driven primarily by escalating costs, unclear ROI, and inadequate risk controls — the precise combination you'd expect when organizations buy tools that can't do what was promised.
Meanwhile, a March 2026 industry survey found that 72% of Global 2000 companies now operate AI agent systems beyond the experimental phase, and 86% of organizations plan to increase AI budgets this year. Those two facts together should give any executive pause. Most large organizations are committing real money to this category. Most of what's being sold in this category isn't what's being advertised.
The downstream effect is beginning to show up in security data, too. A 2026 Gravitee survey found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other. A separate analysis found that 88% of organizations reported confirmed or suspected AI agent security incidents in the last year. That's not a coincidence. When you deploy something believing it has specific, bounded capabilities and it turns out to behave differently — or connect to systems you didn't anticipate — you've created exposure you didn't know you had.
From Our Partners
This issue is supported by Attio. If you're rethinking how AI fits into your operations, your CRM is probably the first place to look.
Attio is the AI CRM for modern teams.
Connect your email and calendar and Attio instantly builds your CRM. Every contact, every company, every conversation — organized in one place. Then ask it anything. No more digging, no more data entry. Just answers.
The Vendor Isn't the Problem. You Are
Here's the argument worth making clearly: the agent washing phenomenon isn't primarily a vendor ethics problem. It's a buyer literacy problem. And fixing it is your responsibility, not theirs.
Vendors operate within the rules of the market. The market right now rewards the label, not the outcome. Calling your product an AI agent gets you in the door. Delivering on that claim is a problem for the implementation team six months from now. The incentive structure is not going to self-correct. The only correction available to you is knowing what questions to ask before you sign anything.
There are three things a real agent can do that a chatbot wearing an agent costume cannot. First: it can show its reasoning. Ask any vendor to demonstrate step-by-step decision logic — not the outcome, the logic. Genuine agents can expose how they arrived at a choice. Rule-based automation cannot, because there's no reasoning to show. Second: it can handle novelty. Ask what happens when the agent encounters a scenario it wasn't built for. Genuine agents adapt. Scripted tools fail. The vendor's answer to that question is revealing regardless of how it's phrased. Third: it can fail gracefully. A genuine agent has defined guardrails and knows when to escalate to a human. An automation tool relabeled as an agent usually just does the wrong thing confidently.
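The third test — failing gracefully — is the easiest to sketch. Here is a hypothetical guardrail, not any product's implementation: the agent acts only when the proposed action is inside an explicit allow-list and its confidence clears a threshold, and escalates to a human otherwise.

```python
# Hypothetical guardrail sketch: an agent that knows its own limits.
# The allow-list and threshold are illustrative, not a real product's.

ALLOWED_ACTIONS = {"draft_reply", "tag_ticket"}  # bounded capabilities
CONFIDENCE_FLOOR = 0.8                           # below this, don't act

def guarded_act(action: str, confidence: float) -> str:
    """Execute only in-scope, high-confidence actions; escalate the rest."""
    if action not in ALLOWED_ACTIONS:
        return f"escalate: '{action}' is outside the agent's mandate"
    if confidence < CONFIDENCE_FLOOR:
        return f"escalate: confidence {confidence:.2f} too low for '{action}'"
    return f"execute: {action}"
```

An automation tool relabeled as an agent has no equivalent of these two checks — which is exactly why it does the wrong thing confidently.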
The companies that will get actual ROI from agentic AI in 2026 are not the ones spending the most. They're the ones who identified a specific, bounded problem, bought a tool genuinely capable of solving it, and built the governance layer before they deployed — not after.
That last part is where most organizations are behind. A 2026 analysis found that while 82% of executives report confidence in their existing policies protecting against unauthorized agent actions, only 14.4% of organizations actually send agents to production with full security or IT approval. Policy confidence and policy enforcement are not the same thing.
Have you started evaluating AI agent vendors? What questions are you asking — and what answers are making you nervous?
Hit reply. I'm collecting these for a follow-up piece and I read every response.
What to Do With This Practically
Start with your current stack before you buy anything new. If your organization already has "AI agents" deployed — through your CRM, your customer service platform, your ERP vendor — take thirty minutes to understand what they're actually doing. Are they taking autonomous action, or are they surfacing recommendations that a human approves? Neither answer is wrong, but knowing which one you have changes how you govern it.
When evaluating new vendors, build three questions into every demo. Ask them to show you a case where the agent encountered an unexpected input and explain exactly how it responded. Ask them to describe the full list of systems and APIs the agent can access. Ask them what the agent cannot do, and listen to whether that answer is specific or vague. A vendor who knows their product's limits is more credible than one promising unlimited capability.
Finally, decide what success actually looks like before you buy. Gartner's most pointed finding wasn't that agentic AI doesn't work — it was that organizations are deploying it without knowing how they'd measure whether it's working. "Improved efficiency" is not a metric. The number of hours saved per week is. The reduction in error rate on a specific process is. Define the measurement before you approve the budget, and you'll have an honest answer in three months instead of a canceled project in eighteen.
Final Thoughts
"Agent washing" borrows its name from greenwashing, and the parallel holds. Companies slap a label on something to make a purchase decision feel more forward-thinking than it is. The reputational cost of greenwashing took years to land. The financial cost of agent washing will move faster, because you'll see it in the P&L before you see it in the press.
Buy the outcome, not the category.
We are out of tokens for this week's context window!✋
- Hashi
Follow Hashi:
X at @hashisiva | LinkedIn




